Conversation
- Removed redundant, commented code in `main.py`
- Adjusted import statement for custom modules in `tests/python/test_utils.py` to reflect new page structure
Moving to multipage format required app restructuring. This commit adjusts the newly created navbar to the original tab style format. It also introduces Enums to store page urls.
- add build_tabs() to build navbar tabs and pages
- adjust navbar to original tab style format
- add page_urls.py to store page urls using Enums
- add reference to Enums in register_page definitions
- Adjusted the tabs behavior to mimic behaviour of original tabs, e.g., initial disabled tabs and style differences between active and inactive tabs; required changes in callbacks and CSS style definitions
- Added multiple versions for user survey alert (Dash Modal/Alert/Toast); final solution will be picked and finalized in the next commit
- Fixed tabs layout behavior for smaller windows
- Selected dbc.Toast for temporary survey alert
- Adjusted test files based on previous errors
Change the label from "Wind Rose" to "Annual Wind Rose" for better clarity and context in the visualization.
Improve robustness of the update_outdoor_comfort_output function by catching KeyError and TypeError exceptions when counting zero values in specified columns. Added docstring for better understanding of function parameters and return value.
Add logging to capture exceptions during the EPW data retrieval process, improving error handling and debugging capabilities.
Refactor the code to maintain the addition of the root directory to sys.path, ensuring that the necessary modules can be imported correctly during test execution.
Specify the development branch for push and pull request events to ensure consistent testing during development.
Enhance the CI workflow by specifying the development branch for push and pull request events. Add Ruff checks and formatting steps to ensure code quality before tests are run.
Upgrade actions to latest versions and streamline deployment steps. Change project ID and image naming conventions for better clarity.
Walkthrough

The PR migrates the app to Dash page-based routing, centralizes configuration, replaces many my_project modules with pages.lib equivalents, introduces UnitSystem enums and timing decorators, updates CI/CD to Cloud Build and revised GitHub workflows, adjusts dependencies/tooling (Ruff, pre-commit), updates styles/docs, and bumps version to 0.9.0.
Sequence Diagram(s)

sequenceDiagram
autonumber
participant User
participant Browser
participant DashApp as Dash App (DashProxy use_pages=True)
participant Layout as pages.lib.layout.build_tabs
participant Registry as dash.page_registry
participant Page as dash.page_container (active page)
participant Stores as dcc.Store(s)
User->>Browser: Open app URL
Browser->>DashApp: Request app
DashApp->>Layout: build_tabs()
Layout->>Registry: query registered pages
Registry-->>Layout: page metadata
Layout-->>DashApp: Banner + Nav + Stores + Page Container + Alert
DashApp-->>Page: render selected page.layout()
Note right of DashApp: Callbacks wired via dash_extensions.enrich @callback
sequenceDiagram
autonumber
participant GH as GitHub Actions
participant Auth as google-github-actions/auth
participant GCloud as google-github-actions/setup-gcloud
participant CB as Cloud Build
participant GCR as Container Registry
participant CR as Cloud Run
GH->>Auth: authenticate (credentials_json)
GH->>GCloud: setup-gcloud (project)
GH->>CB: gcloud builds submit --substitutions
CB->>GCR: push image
CB->>CR: gcloud run deploy (substitutions)
CR-->>GH: deployment result
sequenceDiagram
autonumber
participant Interval as dcc.Interval
participant Callback as display_alert
participant Toast as survey Toast
Interval-->>Callback: n_intervals
alt first tick
Callback->>Toast: set is_open = True
else subsequent ticks
Callback->>Toast: leave state
end
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120+ minutes
Actionable comments posted: 27
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (16)
assets/manifest.json (1)
469-494: PWA id should be stable; using a version string risks duplicate installs and broken update paths

The Web App Manifest “id” identifies the app. Changing it per-release (e.g., "0.9.0") can cause browsers to treat new versions as distinct apps, leading to multiple installs or orphaned icons. Use a stable id that never changes (e.g., "clima" or a scoped path).
Apply:
- "id": "0.9.0",
+ "id": "clima",

Duplicate "display" keys
There are two "display": "standalone" properties (Lines 469 and 489). JSON duplicate keys are ambiguous; the latter wins, but keeping both is confusing. Keep a single "display" and retain "display_override".
  "display": "standalone",
- "id": "0.9.0",
+ "id": "clima",
  "description": "CBE Clima Tool: a free and open-source web application for climate analysis tailored to sustainable building design",
  "start_url": "/",
  "scope": "/",
  "screenshots": [],
  "shortcuts": [],
  "url_handlers": [ { "origin": "https://clima.cbe.berkeley.edu/" } ],
  "lang": "en",
  "categories": [ "education", "utilities", "weather" ],
  "dir": "ltr",
  "prefer_related_applications": false,
- "display": "standalone",
  "display_override": [ "window-controls-overlay", "standalone", "browser" ]

.github/workflows/cypress.yml (1)
21-25: Ensure dev dependencies are installed reproducibly

pipenv install without --deploy/--dev may omit dev packages needed by tests and allow lock drift.
Prefer:
- pip install pipenv
- pipenv install
+ pip install pipenv
+ pipenv sync --dev --verbose

If you don’t commit Pipfile.lock, use:
- pipenv install
+ pipenv install --dev --deploy

.github/workflows/python.yml (1)
16-20: Synchronize CI Python version with .python-version (3.10)

The repository’s .python-version file pins Python to 3.10, but the GitHub Actions workflow is installing 3.11, leading to environment drift. Please align them by either reading the version file or testing a matrix.
Affected file:
- .github/workflows/python.yml (lines 16–20)
Choose one of the following updates:
- Read from the version file (preferred):
- - name: Setup Python
-   uses: actions/setup-python@v4
-   with:
-     python-version: '3.11'
+ - name: Setup Python
+   uses: actions/setup-python@v5
+   with:
+     python-version-file: '.python-version'
- Expand to a version matrix:
jobs:
  pytest:
    runs-on: ubuntu-latest
+   strategy:
+     matrix:
+       python-version: ['3.10', '3.11']
@@
- - name: Setup Python
-   uses: actions/setup-python@v4
-   with:
-     python-version: '3.11'
+ - name: Setup Python
+   uses: actions/setup-python@v5
+   with:
+     python-version: ${{ matrix.python-version }}

pages/wind.py (3)
403-415: wind-direction graph won’t update when the dataset changes

This callback doesn’t take df-store modified_timestamp as an Input, so changing/initializing the weather file may not refresh the wind-direction heatmap. Wind-speed includes the timestamp Input; this should too for parity.
-@callback(
-    Output("wind-direction", "children"),
-    # General
-    [
-        Input("global-local-radio-input", "value"),
-    ],
+@callback(
+    Output("wind-direction", "children"),
+    [
+        Input("df-store", "modified_timestamp"),
+        Input("global-local-radio-input", "value"),
+    ],
     [
         State("df-store", "data"),
         State("meta-store", "data"),
         State("si-ip-unit-store", "data"),
     ],
 )
-def update_tab_wind_direction(global_local, df, meta, si_ip):
+def update_tab_wind_direction(_, global_local, df, meta, si_ip):
629-633: Copy/paste bug: noon_df uses morning_times instead of noon_times

This skews noon counts and captions.
- noon_df = df.loc[
-     (df["hour"] >= morning_times[0]) & (df["hour"] <= morning_times[1])
- ]
+ noon_df = df.loc[
+     (df["hour"] >= noon_times[0]) & (df["hour"] <= noon_times[1])
+ ]
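As a plain-Python sketch of the inclusive hour-window logic these filters rely on (names like `in_hour_window` are hypothetical, not taken from the repo), the boolean conditions above can be expressed and checked like this, including the window that wraps past midnight:

```python
def in_hour_window(hour, start_hour, end_hour):
    """Return True if `hour` falls in the inclusive window [start_hour, end_hour].

    When start_hour > end_hour the window wraps past midnight,
    e.g. (22, 3) covers 22, 23, 0, 1, 2, 3.
    """
    if start_hour <= end_hour:
        return start_hour <= hour <= end_hour
    return hour >= start_hour or hour <= end_hour

# Inclusive on both ends: 8 and 12 are both kept
assert [h for h in range(24) if in_hour_window(h, 8, 12)] == [8, 9, 10, 11, 12]
# Wraparound window keeps late-night and early-morning hours
assert [h for h in range(24) if in_hour_window(h, 22, 3)] == [0, 1, 2, 3, 22, 23]
```

Using `>=` on the lower bound (rather than `>`) is what keeps observations recorded exactly at the start hour.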
443-472: Fix redundant hour filtering and off-by-one boundary in update_custom_wind_rose

The current implementation in pages/wind.py both slices the DataFrame by hour (inclusive) and then passes it to wind_rose, which applies its own hour slicing using a strict “> start_hour” lower bound. As a result, any observations exactly at start_hour are dropped silently, and the double filter is wasteful. You should choose one authoritative filter location and ensure the bounds are correct:

• If you let wind_rose do all slicing:
– Remove lines 458–461 in update_custom_wind_rose (the local hour filter).
– Leave the call as wind_rose(df, ..., [start_month, end_month], [start_hour, end_hour], ...).

pages/wind.py (around line 458)
- if start_hour <= end_hour:
-     df = df.loc[(df["hour"] >= start_hour) & (df["hour"] <= end_hour)]
- else:
-     df = df.loc[(df["hour"] <= end_hour) | (df["hour"] >= start_hour)]
  custom = wind_rose(
      df, "", [start_month, end_month], [start_hour, end_hour], True, si_ip
  )

• If you prefer slicing in update_custom_wind_rose:
- Keep the local hour filter as-is (inclusive both ends).
- Change wind_rose (in pages/lib/template_graphs.py) to use >= start_hour instead of > start_hour so it won’t drop the lower bound again:

pages/lib/template_graphs.py (around line 480)
- if start_hour <= end_hour:
-     df = df.loc[(df["hour"] > start_hour) & (df["hour"] <= end_hour)]
+ if start_hour <= end_hour:
+     df = df.loc[(df["hour"] >= start_hour) & (df["hour"] <= end_hour)]

Either approach eliminates the redundant slice and guarantees that data at start_hour is included.

pages/lib/charts_data_explorer.py (3)
37-39: Guard lookups of filter metadata behind data_filter.

filter_name and filter_unit are computed even when data_filter is False, which can break if filter_var is None or not meaningful.
Apply this diff to scope those lookups:
- filter_name = mapping_dictionary[filter_var]["name"]
- filter_unit = mapping_dictionary[filter_var][si_ip]["unit"]
+ if data_filter:
+     filter_name = mapping_dictionary[filter_var]["name"]
+     filter_unit = mapping_dictionary[filter_var][si_ip]["unit"]
24-28: Avoid SettingWithCopy and in-place mutation; use .loc on a working copy.

Chained assignment can be ignored by pandas and mutating df in place can have side effects for callers.
Apply this diff to fix both:
- if data_filter:
-     if min_val <= max_val:
-         mask = (df[filter_var] < min_val) | (df[filter_var] > max_val)
-         df[var][mask] = None
-     else:
-         mask = (df[filter_var] >= max_val) & (df[filter_var] <= min_val)
-         df[var][mask] = None
+ if data_filter:
+     df = df.copy()
+     if min_val <= max_val:
+         mask = (df[filter_var] < min_val) | (df[filter_var] > max_val)
+     else:
+         mask = (df[filter_var] >= max_val) & (df[filter_var] <= min_val)
+     df.loc[mask, var] = None
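A minimal, self-contained sketch of the copy-plus-`.loc` pattern the diff recommends (column names here are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({"speed": [1.0, 5.0, 9.0], "temp": [10, 20, 30]})

# Work on a copy so the caller's frame is untouched,
# and assign through .loc to avoid chained-assignment pitfalls.
out = df.copy()
mask = (out["speed"] < 2.0) | (out["speed"] > 8.0)
out.loc[mask, "temp"] = None  # only the 'temp' column is blanked

assert out["temp"].isna().tolist() == [True, False, True]
assert df["temp"].notna().all()  # original frame unchanged
```

Unlike `df[var][mask] = None`, the `.loc[mask, var]` form is a single indexing operation, so pandas applies it reliably instead of possibly writing to a temporary copy.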
129-134: Filtering sets entire rows to None; prefer boolean subsetting or targeted columns.

df.loc[mask] = None wipes every column, which is heavy-handed and can cause unexpected NaNs. Use a filtered DataFrame for plotting.
Apply this diff to create a filtered view:
- if data_filter:
-     if min_val <= max_val:
-         df.loc[(df[filter_var] < min_val) | (df[filter_var] > max_val)] = None
-     else:
-         df.loc[(df[filter_var] >= max_val) & (df[filter_var] <= min_val)] = None
+ df_plot = df
+ if data_filter:
+     if min_val <= max_val:
+         mask = (df[filter_var] >= min_val) & (df[filter_var] <= max_val)
+     else:
+         mask = (df[filter_var] < min_val) & (df[filter_var] > max_val)
+     df_plot = df.loc[mask].copy()

And adjust the scatter call to use df_plot:
- fig = px.scatter(
-     df,
+ fig = px.scatter(
+     df_plot,

pages/lib/import_one_building_files.py (2)
17-31: Harden parsing: multiple regex .findall()[0] calls can raise IndexError

Accessing [0] assumes every Placemark has coordinates, URL, and name. Real-world KML often has missing tags, which will crash the import.
Safer extraction using re.search with fallbacks:
- # lat
- location_info.append(
-     re.findall(r"<coordinates>(.+?)<\/coordinates>", location)[0].split(",")[0]
- )
- # long
- location_info.append(
-     re.findall(r"<coordinates>(.+?)<\/coordinates>", location)[0].split(",")[1]
- )
- # name
- location_info.append(re.findall(r"<name>(.+?)<\/name>", location)[0])
- # url
- location_info.append(
-     "<a href="
-     + re.findall(r"<td>URL (.+?)<\/td>", location)[0]
-     + ' style="color: #fff">Climate.OneBuilding.Org</a>'
- )
+ # coordinates: KML order is lon,lat,alt
+ m_coord = re.search(r"<coordinates>(.+?)<\/coordinates>", location)
+ lon, lat = (None, None)
+ if m_coord:
+     parts = m_coord.group(1).split(",")
+     if len(parts) >= 2:
+         lon, lat = parts[0], parts[1]
+ location_info.append(lon)
+ location_info.append(lat)
+ # name
+ m_name = re.search(r"<name>(.+?)<\/name>", location)
+ location_info.append(m_name.group(1) if m_name else "not available")
+ # url
+ m_url = re.search(r"<td>URL (.+?)<\/td>", location)
+ url = m_url.group(1) if m_url else "#"
+ location_info.append(f'<a href={url} style="color: #fff">Climate.OneBuilding.Org</a>')

Also, the comments currently say “lat/long” but the code stores lon first. The updated block aligns comments with KML ordering.
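The `re.search`-with-fallback pattern above can be sketched as a tiny runnable helper (the helper name and sample markup are hypothetical, not from the repo):

```python
import re

def extract_first(pattern, text, default=None):
    # re.search returns None instead of raising, unlike findall(...)[0]
    m = re.search(pattern, text)
    return m.group(1) if m else default

placemark = "<Placemark><name>Station A</name></Placemark>"  # no <coordinates> tag

name = extract_first(r"<name>(.+?)</name>", placemark, "not available")
coords = extract_first(r"<coordinates>(.+?)</coordinates>", placemark, "0,0")

assert name == "Station A"
assert coords == "0,0"  # missing tag falls back instead of raising IndexError
```

The key point is that a malformed Placemark degrades to a default value rather than aborting the whole import loop.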
64-77: Remove trailing space from the “1% Cooling DB” column name

The header "1% Cooling DB " currently includes a trailing space in two places and should be updated to "1% Cooling DB" to prevent subtle downstream bugs:
- pages/lib/import_one_building_files.py, around line 75:
Change:
- "1% Cooling DB ",
+ "1% Cooling DB",

- pages/select.py, around line 360:
Change:
- "1% Cooling DB ",
+ "1% Cooling DB",

No other code references the exact "1% Cooling DB " string; the HTML-parsing split in import_one_building_files.py (line 57) uses the literal "1% Cooling DB <b>" and remains unaffected. After making these two edits, run any existing tests or scripts to verify that column lookups and UI views continue to function as expected.

pages/lib/charts_sun.py (3)
27-33: Coerce si_ip to UnitSystem for backward compatibility and avoid silent breakages

Older call sites may still pass "si"/"ip" strings. Since mapping_dictionary now uses UnitSystem keys, string keys will break lookups and the equality checks below.
Add a small adapter at the start of monthly_solar:
  def monthly_solar(epw_df, si_ip):
+     # Back-compat: accept both UnitSystem and string values
+     if isinstance(si_ip, str):
+         si_ip = UnitSystem(si_ip)
@@
-     if si_ip == UnitSystem.SI:
+     if si_ip == UnitSystem.SI:
          fig.update_yaxes(range=[0, 1000])
-     if si_ip == UnitSystem.IP:
+     if si_ip == UnitSystem.IP:
          fig.update_yaxes(range=[0, 400])

Apply the same coercion in polar_graph and custom_cartesian_solar (see below).
Also applies to: 98-101
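The coercion idiom relies on `Enum` lookup by value. A standalone sketch, assuming the enum's values are the legacy strings "si"/"ip" (the real UnitSystem in pages.lib may use different values):

```python
from enum import Enum

class UnitSystem(Enum):
    SI = "si"
    IP = "ip"

def coerce_unit(si_ip):
    # Enum("si") looks the member up by value; passing a member through is a no-op
    return UnitSystem(si_ip) if isinstance(si_ip, str) else si_ip

assert coerce_unit("si") is UnitSystem.SI
assert coerce_unit(UnitSystem.IP) is UnitSystem.IP
```

An unknown string (e.g. "metric") raises ValueError, which surfaces bad call sites early instead of silently failing dictionary lookups.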
117-129: Guard against zero-variance var when computing marker_size

If vals.max() == vals.min(), the division will raise ZeroDivisionError or yield inf.
- vals = solpos[var]
- marker_size = ((vals - vals.min()) / (vals.max() - vals.min()) + 1) * 4
+ vals = solpos[var]
+ denom = float(vals.max() - vals.min())
+ scale = ((vals - vals.min()) / denom) if denom > 0 else 0
+ marker_size = (scale + 1) * 4

Apply the same fix in custom_cartesian_solar (Lines 338–340).
Also applies to: 139-145
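The guarded min-max scaling can be sketched in plain Python (function name and the base size of 4 mirror the diff, but this is an illustration, not the repo's code):

```python
def scaled_sizes(values, base=4):
    """Min-max scale `values` into marker sizes, guarding a constant series."""
    lo, hi = min(values), max(values)
    span = hi - lo
    if span == 0:
        # all values equal: fall back to a uniform size instead of dividing by zero
        return [base] * len(values)
    return [((v - lo) / span + 1) * base for v in values]

assert scaled_sizes([3, 3, 3]) == [4, 4, 4]  # constant input, no ZeroDivisionError
assert scaled_sizes([0, 10]) == [4, 8]       # min maps to base, max to 2*base
```

The same shape of guard applies to the pandas Series version in the diff: compute the denominator once, branch on zero, then scale.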
320-333: Also coerce si_ip in custom_cartesian_solar and polar_graph

Same back-compat concern as monthly_solar; add coercion at the top of both functions.
  def polar_graph(df, meta, global_local, var, si_ip):
+     if isinstance(si_ip, str):
+         si_ip = UnitSystem(si_ip)
@@
  def custom_cartesian_solar(df, meta, global_local, var, si_ip):
+     if isinstance(si_ip, str):
+         si_ip = UnitSystem(si_ip)

pages/t_rh.py (1)
182-205: Fix the output declaration inconsistency.

The callback decorator wraps a single output in a list [Output("heatmap", "children")], but the function returns a single dcc.Graph component rather than a list. This inconsistency should be addressed for clarity.

Apply this diff to fix the output declaration:
-@callback(
-    [Output("heatmap", "children")],
+@callback(
+    Output("heatmap", "children"),

pages/outdoor.py (1)
202-211: Unusual tuple return from layout function.

The layout() function returns a tuple containing a single element (note the comma after the closing parenthesis on Line 211). This is likely unintentional and could cause issues with Dash's page rendering.

Apply this diff to fix the return value:
def layout():
-    return (
+    return (
         dcc.Loading(
             type="circle",
             children=html.Div(
                 className="container-col",
                 children=[inputs_outdoor_comfort(), outdoor_comfort_chart()],
             ),
         ),
-    )
+    )

Or more simply:
def layout():
-    return (
-        dcc.Loading(
-            type="circle",
-            children=html.Div(
-                className="container-col",
-                children=[inputs_outdoor_comfort(), outdoor_comfort_chart()],
-            ),
-        ),
-    )
+    return dcc.Loading(
+        type="circle",
+        children=html.Div(
+            className="container-col",
+            children=[inputs_outdoor_comfort(), outdoor_comfort_chart()],
+        ),
+    )
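The underlying Python pitfall is that a trailing comma, not the parentheses, is what creates a tuple. A minimal illustration (these stand-in functions return a string instead of a Dash component):

```python
def layout_with_comma():
    return (
        "page-content",
    )  # trailing comma: this is a 1-tuple, not the bare value

def layout_plain():
    return (
        "page-content"
    )  # no comma: the parentheses are just grouping

assert isinstance(layout_with_comma(), tuple)
assert layout_plain() == "page-content"
```

This is why deleting a single comma changes what the page framework receives from the layout callable.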
🧹 Nitpick comments (67)
docs/documentation/tabs-explained/data-explorer.md (2)
17-22: Fix heading level and linter warning; add accessible fallback link
- MD001: use “## Video Tutorial” after an H1.
- MD034: provide a clickable link fallback alongside the embed for linters and accessibility.
-### Video Tutorial
+## Video Tutorial
 Learn more about the Data Explorer tab by watching the following video.
-{% embed url="https://youtu.be/VJ_wOHadVdw?si=1iXC2GvWPdPBGzMc&t=825" %}
+{% embed url="https://youtu.be/VJ_wOHadVdw?si=1iXC2GvWPdPBGzMc&t=825" %}
+
+If the embed does not load, watch on YouTube: [Data Explorer walkthrough](https://youtu.be/VJ_wOHadVdw?si=1iXC2GvWPdPBGzMc&t=825).
11-11: Minor copy edit opportunity

Consider “heatmap” (one word) for consistency with later usage in the doc.
-The single-variable analysis allows data to be displayed in 4 outputs: a yearly chart, monthly daily charts, an annual heatmap chart, and a descriptive statistics table.
+The single-variable analysis allows data to be displayed in four outputs: a yearly chart, monthly daily charts, an annual heatmap, and a descriptive statistics table.

.bumpversion.cfg (1)
2-2: Confirm version source of truth and release flow

Setting current_version to 0.9.0 is fine post-bump, but ensure this is the single source of truth driving any user-facing version strings, packaging metadata, and docs. If other files hold a version (pyproject.toml, setup.cfg, package.json, docs), add them here to avoid drift.
Would you like me to scan the repo for additional version strings and propose a consolidated bumpversion config?
.github/workflows/cypress.yml (2)
26-29: Wait for the app to be ready before running Cypress

Starting the server in background without readiness checks can cause flaky CI. Add an explicit wait on the app URL.
Add a step after “Start Clima”:
  - name: Start Clima
    run: |-
      pipenv run python main.py &
+ - name: Wait for Clima
+   run: npx --yes wait-on http://localhost:8050
  - name: Setup Node

Adjust the URL/port if different in your app config.
Also applies to: 41-44
46-51: Optional: set a retention period for screenshots to match videos

Keep artifact retention consistent to control storage.
  - name: Archive screenshots
    uses: actions/upload-artifact@v4
    if: always()
    with:
      name: cypress-screenshots
      path: tests/node/cypress/screenshots
+     retention-days: 3

docs/contributing/contributing.md (1)
19-24: Capitalize “Ruff” consistently in prose

Minor editorial nit: capitalize the tool name “Ruff” (proper noun) for consistency with the rest of the repo and common usage.
-We use ruff to enforce the code style and code formatting. You can run it with:
+We use Ruff to enforce the code style and code formatting. You can run it with:

.dockerignore (1)
16-18: Remove duplicate ignores for node_modules

node_modules is listed twice. Keep one entry to reduce clutter.
-tests
-node_modules
+tests
 .my_cache
@@
-# Dependency directories
-node_modules/
+# Dependency directories
+# node_modules/ already ignored above

Also applies to: 133-134
docs/documentation/weather-file-repositories.md (2)
3-3: Grammar: add comma and article

Readability tweak: add “the” and a comma after the introductory clause.
-In addition to the data from [Energy Plus](https://energyplus.net/weather) and [Climate.One.Building.org](http://climate.onebuilding.org/) CBE Clima Tool allows users to visualize any valid EPW file. Below we list some free sources from which climate files can be obtained.
+In addition to the data from [Energy Plus](https://energyplus.net/weather) and [Climate.One.Building.org](http://climate.onebuilding.org/), the CBE Clima Tool allows users to visualize any valid EPW file. Below, we list some free sources from which climate files can be obtained.
12-12: Consistent bullet punctuation

This list mixes trailing semicolons and no punctuation. Prefer no punctuation or periods uniformly. Suggest removing semicolons.
-* [Climate.OneBuilding](https://climate.onebuilding.org/);
+* [Climate.OneBuilding](https://climate.onebuilding.org/)

Apply similarly to the other list items for consistency.
.github/workflows/python.yml (2)
21-25: Prefer reproducible installs with the lockfile

Pipenv install may resolve anew. Use pipenv sync --dev to install exactly what’s in Pipfile.lock.

- - name: Build Clima
+ - name: Build Clima (reproducible)
    run: |-
      pip install pipenv
-     pipenv install --dev
+     pipenv sync --dev
21-25: Speed up with caching

Add cache for Pipenv’s virtualenvs and pip wheels.
  - name: Build Clima
    run: |-
      pip install pipenv
      pipenv install --dev
+ - name: Cache Pip / Pipenv
+   uses: actions/cache@v4
+   with:
+     path: |
+       ~/.cache/pip
+       ~/.local/share/virtualenvs
+     key: ${{ runner.os }}-pipenv-${{ hashFiles('**/Pipfile.lock') }}
+     restore-keys: |
+       ${{ runner.os }}-pipenv-

tests/node/cypress/e2e/spec.cy.js (2)
13-13: Use Cypress baseUrl to avoid hardcoding localhost.

Hardcoding http://127.0.0.1:8080 couples tests to a single host/port. Prefer baseUrl in Cypress config and visit relative paths.

Apply this diff here, and set baseUrl in cypress.config.(js|ts):

- cy.visit('http://127.0.0.1:8080');
+ cy.visit('/');

Do the same for Line 100. Example config:

// cypress.config.js
module.exports = { e2e: { baseUrl: 'http://localhost:8080' } }

Also applies to: 100-100
11-97: Reduce duplication and tighten assertion scopes to improve stability.
- Repeated visit/upload can move to beforeEach.
- Many text-only assertions are global; scoping to sections (e.g., within a container or heading) avoids false positives.
- Short labels like “IP” are ambiguous; assert within the banner controls container.
I can provide a scoped refactor that introduces beforeEach, uses within, and adds a small helper to assert inside sections; say the word and I’ll draft it.

Also applies to: 99-109
cloudbuild.yaml (1)
1-15: Optional: adopt Artifact Registry and add step ids for better logs.

Container Registry (gcr.io) is deprecated; Artifact Registry is the recommended target. Adding
id: labels improves Cloud Build logs. If/when you migrate, a minimal change would be to switch
_GCR to your AR host, and add ids:

steps:
-# Build the container image
-- name: 'gcr.io/cloud-builders/docker'
+  # Build the container image
+  - id: 'build'
+    name: 'gcr.io/cloud-builders/docker'
     args: ['build', '-t', '$_GCR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME', '.']
-# Push the container image to Container Registry
-- name: 'gcr.io/cloud-builders/docker'
+  # Push the container image to registry
+  - id: 'push'
+    name: 'gcr.io/cloud-builders/docker'
     args: ['push', '$_GCR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME']
-# Deploy container image to Cloud Run
-- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
+  # Deploy container image to Cloud Run
+  - id: 'deploy'
+    name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
     entrypoint: 'gcloud'
     args: ['run', 'deploy', '$_REPO_NAME', '--image', '$_GCR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME', '--region', '$_REGION', '--memory', '$_MEMORY', '--cpu', '$_CPU']

And for AR, set
_GCR like us-central1-docker.pkg.dev/<PROJECT>/<REPO>.

docs/contributing/run-project-locally.md (1)
100-102: Docs: mark deployment blocks as bash and consider DRYing substitutions.

Using
textfences removes shell highlighting/copy convenience. Also, the two submit commands only differ by_REPO_NAME/_IMG_NAME; consider documenting them as variables.Apply this diff to switch to bash fences:
-```text +```bash gcloud builds submit --project=clima-316917 --substitutions=_REPO_NAME="clima-test",_PROJ_NAME="clima-316917",_IMG_NAME="test",_GCR="us.gcr.io",_REGION="us-central1",_MEMORY="4Gi",_CPU="2"@@
-text +bash
gcloud builds submit --project=clima-316917 --substitutions=_REPO_NAME="clima",_PROJ_NAME="clima-316917",_IMG_NAME="main",_GCR="us.gcr.io",_REGION="us-central1",_MEMORY="4Gi",_CPU="2"If helpful, I can propose a small shell snippet that sets the shared substitutions once and reuses them.
Also applies to: 104-108
README.md (4)
3-3: Prefer HTTPS for the public site link.Small polish and avoids mixed-content warnings.
Apply this diff:
-The CBE Clima Tool is a web-based application built to support climate analysis specifically designed to support the need of architects and engineers interested in climate-adapted design. It can be freely accessed at [clima.cbe.berkeley.edu](http://clima.cbe.berkeley.edu). +The CBE Clima Tool is a web-based application built to support climate analysis specifically designed to support the need of architects and engineers interested in climate-adapted design. It can be freely accessed at [clima.cbe.berkeley.edu](https://clima.cbe.berkeley.edu).
5-5: Standardize “EnergyPlus” spelling across the README.The README mixes “Energy Plus” and “EnergyPlus.” The official name is “EnergyPlus.”
Apply this diff:
-It allows users to analyze the climate data of more than 27,500 locations worldwide from both [Energy Plus](https://energyplus.net/weather) and [Climate.One.Building.org](http://climate.onebuilding.org/). You can, however, also choose to upload your own EPW weather file. +It allows users to analyze the climate data of more than 27,500 locations worldwide from both [EnergyPlus](https://energyplus.net/weather) and [Climate.OneBuilding.org](http://climate.onebuilding.org/). You can, however, also choose to upload your own EPW weather file. @@ -Our tool can be used to analyze and visualize data contained in EnergyPlus Weather (EPW) files. +Our tool can be used to analyze and visualize data contained in EnergyPlus Weather (EPW) files. @@ -* **Extensive Weather Data:** Access weather files for over 27,500 locations from EnergyPlus and Climate.One.Building.org. +* **Extensive Weather Data:** Access weather files for over 27,500 locations from EnergyPlus and Climate.OneBuilding.org.Also applies to: 7-7, 12-12
25-25: Wrap the DOI as a Markdown link to avoid “bare URL” lint.Apply this diff:
-Betti, G., Tartarini, F. & Schiavon, S. CBE Clima Tool: a free and open-source web application for climate analysis for architects and building engineers. *Build. Simul.* (2023). https://doi.org/10.1007/s12273-023-1090-5 +Betti, G., Tartarini, F. & Schiavon, S. CBE Clima Tool: a free and open-source web application for climate analysis for architects and building engineers. *Build. Simul.* (2023). [https://doi.org/10.1007/s12273-023-1090-5](https://doi.org/10.1007/s12273-023-1090-5)
11-11: Minor wording polish.“A variety of” is a bit wordy; “various” is tighter.
Apply this diff:
-* **Interactive Climate Analysis:** Visualize EPW weather data through a variety of interactive charts. +* **Interactive Climate Analysis:** Visualize EPW weather data through various interactive charts..github/workflows/deploy.yml (1)
27-32: Optional: add workflow_dispatch and concurrency; consider WIF instead of JSON key.
- Add
workflow_dispatch:so you can trigger redeploys manually.- Add a
concurrencygroup to avoid overlapping deploys.- If possible, replace long-lived JSON keys with Workload Identity Federation.
I can open a follow-up PR with:
on: [push, workflow_dispatch]concurrency: group: deploy-production, cancel-in-progress: true- WIF setup (permissions:
id-token: write; useworkload_identity_provider+service_accountin the auth step).requirements.txt (2)
2-2: Don’t ship dev/test tooling in production requirements. Move to dev-only.Black, Ruff, Pytest, jsbeautifier, Pygments, pathspec, platformdirs, EditorConfig, pluggy, iniconfig, and mypy_extensions are developer/test utilities. Installing them in production increases image size, attack surface, and dependency churn. Keep them in Pipfile [dev-packages] (already present) or a separate requirements-dev.txt used by CI only.
Apply this diff to trim prod requirements (verify nothing at runtime imports these before merging):
-black==25.1.0 -EditorConfig==0.17.1 -iniconfig==2.1.0 -jsbeautifier==1.15.4 -mypy_extensions==1.1.0 -pathspec==0.12.1 -platformdirs==4.3.8 -Pygments==2.19.2 -pytest==8.4.1 -ruff==0.12.7 -pluggy==1.6.0If you need a dev lockfile, I can generate a requirements-dev.txt from Pipfile[dev-packages].
Also applies to: 19-19, 25-25, 28-28, 32-32, 39-40, 42-42, 44-45, 51-51
12-14: Remove redundant Dash component packages (bundled in Dash 2+).dash-core-components, dash-html-components, and dash-table are bundled with Dash since 2.0 and need not be installed separately. Keeping them risks version drift and import confusion.
-dash-core-components==2.0.0 -dash-html-components==2.0.0 -dash-table==5.0.0Also applies to: 17-17
assets/tabs.css (3)
60-68: Add a fallback for browsers/environments without :has() support.:has() is broadly supported now, but a graceful fallback avoids regressions and helps older WebViews.
-.custom-tab:has(.active) { +.custom-tab:has(.active), +/* Fallback for legacy markup/classes */ +.custom-tab--selected { color:#586069; background-color: white; box-shadow: 1px 1px 0 white; border-left: 1px solid lightgrey !important; border-right: 1px solid lightgrey !important; border-top: 6px solid #abd2ff !important; border-bottom: 1px solid transparent; -} +}
70-96: Scope “nav-pills” overrides to this app section to avoid fighting Bootstrap defaults.Overriding the global .nav-pills rules may have unintended side effects elsewhere (dbc ships Bootstrap’s .nav-pills). Scoping under the local container avoids CSS collisions.
```diff
-.nav-pills {
+#tabs-parent .nav-pills {
   display: flex;
   flex-wrap: wrap;
 }

-@media (max-width: 900px) {
-  .nav-pills {
+@media (max-width: 900px) {
+  #tabs-parent .nav-pills {
     flex-direction: column;
   }
 }

-.nav-pills .nav-link {
+#tabs-parent .nav-pills .nav-link {
   padding: 0;
   color: #586069;
   font-family: "system-ui";
   background-color: transparent;
 }

-.nav-pills .nav-link.disabled {
+#tabs-parent .nav-pills .nav-link.disabled {
   color: #c8c6c6;
 }

-.nav-pills .nav-link.active {
+#tabs-parent .nav-pills .nav-link.active {
   color: black;
   background-color: white;
 }
```
26-28: Deduplicate #store-container declarations.

The selector appears twice; combine for clarity and to avoid accidental drift.
```diff
-#store-container {
-  padding: 0;
-}
+/* Store container */
+#store-container {
+  padding: 0;
+  height: 100%;
+}

-#store-container {
-  height: 100%;
-}
```

Also applies to: 107-109
Pipfile (1)
7-7: Pin unversioned packages and align pins with requirements.txt.
- dash: prefer explicit patch version (2.15.0) to reduce nondeterminism.
- dash-iconify: pin to 0.1.2 (already pinned in requirements.txt).
- pytest: pin to 8.4.1 to match CI and avoid plugin breakages.
```diff
-dash = "==2.15"
+dash = "==2.15.0"
@@
-dash-iconify = "*"
+dash-iconify = "==0.1.2"
@@
-pytest = "*"
+pytest = "==8.4.1"
```

If Pipenv is the source of truth, I can generate an updated requirements.txt via pip-compile or pipenv lock to keep both files consistent.
Also applies to: 17-18, 22-22
config.py (3)
9-13: Prefer environment-driven DEBUG instead of OS sniffing.

Tying DEBUG to macOS can surprise Linux devs and leak debug in atypical deploys. Use an env flag with a safe default (and keep the macOS convenience if desired).
```diff
-from enum import Enum
-import platform
+from enum import Enum
+import os
+import platform
@@
-    DEBUG: bool = "macOS" in platform.platform()
+    # Enable via CLIMA_DEBUG=1/true/yes (fallback: enable on macOS for convenience)
+    DEBUG: bool = os.getenv("CLIMA_DEBUG", "").lower() in {"1", "true", "yes"} or (
+        "macOS" in platform.platform()
+    )
```

I can also wire this into app.py to gate analytics and other prod-only features.
1-1: Use StrEnum (Py 3.11+) for string enums.

Since we’re on Python 3.11, StrEnum simplifies string-enum behavior and avoids subclassing str explicitly.
```diff
-from enum import Enum
+from enum import StrEnum
@@
-class UnitSystem(str, Enum):
+class UnitSystem(StrEnum):
@@
-class PageUrls(str, Enum):
+class PageUrls(StrEnum):
@@
-class DocLinks(str, Enum):
+class DocLinks(StrEnum):
```

This is a no-op for behavior but clearer and type-checker friendly.
Also applies to: 16-21, 23-40, 82-97
45-46: Consider using pathlib for file/asset paths.

Hard-coded strings work, but pathlib yields safer joins if we later relocate files or compute absolute paths.
Happy to patch this if you want a quick refactor.
Also applies to: 51-52
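As a sketch of what that refactor could look like (the BASE_DIR anchor and the attribute names below are illustrative assumptions, not the actual config.py contents):

```python
# Hypothetical sketch: deriving asset paths with pathlib instead of
# hard-coded strings. Anchoring on __file__ keeps paths valid even if
# the working directory changes or the package is relocated.
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent


class FilePaths:
    CHANGELOG = BASE_DIR / "CHANGELOG.md"
    NOT_FOUND_ANIMATION = BASE_DIR / "assets" / "animations" / "page_not_found.json"
```

Joins stay correct and OS-independent, and callers can still pass the values anywhere a string path is expected via `str(...)`.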
app.py (2)
34-36: Gate third‑party analytics scripts behind a prod check.

Avoid loading analytics in DEBUG/dev to keep local runs clean and privacy-friendly.
```diff
-    <!-- Siteimprove Analytics -->
-    <script async src="https://siteimproveanalytics.com/js/siteanalyze_6294756.js"></script>
+    <!-- Siteimprove Analytics (injected only in production via app.py) -->
```

Add this Python snippet to build index_string conditionally (outside this hunk):
```python
# in app.py, after creating `app`
from config import AppConfig

siteimprove = (
    '<script async src="https://siteimproveanalytics.com/js/siteanalyze_6294756.js"></script>'
    if not AppConfig.DEBUG
    else ""
)
app.index_string = app.index_string.replace(
    "<!-- Siteimprove Analytics (injected only in production via app.py) -->",
    siteimprove,
)
```

If you prefer Jinja templating, I can rewrite index_string to accept a variable and set it via a template renderer.
45-45: Use a direct image URL for Open Graph.

The current og:image points to a GitHub HTML page, not the actual PNG. Social scrapers won’t render it.
```diff
-    <meta property="og:image" content="https://github.com/CenterForTheBuiltEnvironment/clima/tree/main/assets/og-image.png">
+    <meta property="og:image" content="https://raw.githubusercontent.com/CenterForTheBuiltEnvironment/clima/main/assets/og-image.png">
```

Alternatively, host it from your domain (e.g., https://clima.cbe.berkeley.edu/assets/og-image.png) for more reliable scraping.
pages/changelog.py (2)
8-12: Register page with an explicit menu order

Add order=PageInfo.CHANGELOG_ORDER so the page appears in the intended position in the nav alongside other pages using PageInfo.*_ORDER.
```diff
 dash.register_page(
     __name__,
     name=PageInfo.CHANGELOG_NAME,
     path=PageUrls.CHANGELOG.value,
+    order=PageInfo.CHANGELOG_ORDER,
 )
```
15-23: Handle missing/invalid CHANGELOG gracefully and specify UTF-8

Reading the file on every layout call is fine (keeps it live), but specify encoding and surface a friendly message if the file is missing or unreadable.
```diff
 def layout():
     """changelog page"""
-    with open(FilePaths.CHANGELOG, "r") as f:
-        changelog_content = f.read()
+    try:
+        with open(FilePaths.CHANGELOG, "r", encoding="utf-8") as f:
+            changelog_content = f.read()
+    except FileNotFoundError:
+        changelog_content = "# Changelog not found\n\nThe changelog file could not be located."
+    except Exception as e:
+        changelog_content = f"# Error loading changelog\n\n{e}"
     return dbc.Container(
         className="container p-4",
         children=[dcc.Markdown(changelog_content)],
     )
```

Would you like me to add a minimal unit test that asserts the layout can render when the file is missing by monkeypatching FilePaths.CHANGELOG?
pages/not_found_404.py (1)
25-27: Confirm Lottie asset path works in production

Assets.NOT_FOUND_ANIMATION points to an absolute path “/assets/animations/page_not_found.json”. If the app is served behind a subpath, absolute URLs may 404; consider url_pathname_prefix-aware paths or using app.get_asset_url.
If needed:
```diff
-    url=Assets.NOT_FOUND_ANIMATION,
+    url=dash.get_asset_url("animations/page_not_found.json"),
```

pages/wind.py (3)
495-507: Seasonal hour range starts at 01:00 but wind_rose excludes the start hour

hours = [1, 24] combined with wind_rose’s “> start_hour” yields 02:00–24:00. Decide on inclusivity and align. If you adopt inclusive start, adjust wind_rose; else set hours=[0,24] or [0,23] accordingly and update captions.
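To make the off-by-one concrete, here is a dependency-free sketch of the hour filtering; the function name and the inclusive_start flag are illustrative, not wind_rose’s actual signature:

```python
def filter_hours(hours, start_hour, end_hour, inclusive_start=True):
    """Boolean mask over hour values (1-24).

    With inclusive_start=True, [1, 24] covers 01:00-24:00; the current
    '> start_hour' comparison silently drops the first hour instead.
    """
    lo = start_hour if inclusive_start else start_hour + 1
    return [lo <= h <= end_hour for h in hours]


inclusive = filter_hours(range(1, 25), 1, 24)         # keeps all 24 hours
exclusive = filter_hours(range(1, 25), 1, 24, False)  # drops 01:00
print(sum(inclusive), sum(exclusive))  # → 24 23
```

Whichever convention you pick, applying it in one place like this keeps the seasonal and annual calls consistent.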
641-646: Caption percent spacing inconsistency (“%.” vs. “ %.” elsewhere)

For consistency with seasonal captions, include a space before the percent sign or standardize across the app.
```diff
-    f"{str(int(100 * (count / 8760)))}%. {calm_count} "
+    f"{str(int(100 * (count / 8760)))} %. {calm_count} "
```
369-374: Confirm annual hour boundaries

annual = wind_rose(df, "", [1, 12], [1, 24], True, si_ip) will exclude hour 1 given current wind_rose logic. Aligning this with your chosen hour-domain decision will avoid silent off-by-one.
pages/sun.py (3)
167-195: Unit label logic is clear; consider reusing UnitSystem values directly

This is fine as-is. If you later centralize unit strings (e.g., in config), update here to avoid hardcoding.
255-268: Use generate_custom_inputs(var) for consistent filenames

Elsewhere you use generate_custom_inputs(var) to turn var into a nice title-cased token for image export. Here you embed raw var. For consistency across tabs, prefer the helper.
```diff
-    custom_inputs = "" if var == "None" else f"{var}"
+    custom_inputs = "" if var == "None" else generate_custom_inputs(var)
```
226-238: Month tick values: verify axis domain matches tickvals 0–11

You set tickvals=np.arange(0, 12, 1) and ticktext=month_lst. If the x-axis is 1–12 (months), ticks will be shifted. If barchart uses 0–11 internally, this is fine; otherwise set tickvals to 1–12.
```diff
-    cover.update_xaxes(
-        dict(tickmode="array", tickvals=np.arange(0, 12, 1), ticktext=month_lst)
-    )
+    cover.update_xaxes(
+        dict(tickmode="array", tickvals=list(range(1, 13)), ticktext=month_lst)
+    )
```

If barchart already emits 0–11, ignore this suggestion.
docs/documentation/tabs-explained/wind/README.md (1)
15-20: Fix markdownlint issues: heading level and bare URL
- Heading jumps from H1 to H3; use H2.
- The embed line is flagged as a bare URL by markdownlint. Either add a proper linked caption or disable MD034 around the embed.
```diff
-### Video Tutorial
+## Video Tutorial

 Learn more about the Wind tab by watching the following video.

-{% embed url="https://youtu.be/VJ_wOHadVdw?si=W90QpOb4VoqUPcbA&t=524" %}
+<!-- markdownlint-disable MD034 -->
+{% embed url="https://youtu.be/VJ_wOHadVdw?si=W90QpOb4VoqUPcbA&t=524" %}
+<!-- markdownlint-enable MD034 -->
+
+Watch on YouTube: [Wind tab tutorial](https://youtu.be/VJ_wOHadVdw?si=W90QpOb4VoqUPcbA&t=524)
```

pages/lib/charts_data_explorer.py (2)
92-93: DOY axis upper bound should allow leap years or derive from data.

Hard-coding 365 may truncate data for leap years.
Use the data-driven max:
```diff
-    yaxis=dict(range=(1, 24)),
-    xaxis=dict(range=(1, 365)),
+    yaxis=dict(range=(int(df["hour"].min()), int(df["hour"].max()))),
+    xaxis=dict(range=(int(df["DOY"].min()), int(df["DOY"].max()))),
```
1-4: Duplicate math imports; standardize usage.

Both `from math import ceil, floor` and `import math` are present. Keep one style for clarity.
Apply this diff:
```diff
-from math import ceil, floor
-
-import numpy as np
-import math
+import numpy as np
+import math
@@
-    data_max = 5 * ceil(df[var].max() / 5)
-    data_min = 5 * floor(df[var].min() / 5)
+    data_max = 5 * math.ceil(df[var].max() / 5)
+    data_min = 5 * math.floor(df[var].min() / 5)
```

Also applies to: 45-46, 123-124
docs/documentation/tabs-explained/natural-ventilation.md (2)
29-33: Fix heading level and quiet markdownlint for GitBook embed URL.

Use h2 to satisfy MD001 and silence MD034 around the embed, which requires a bare URL.
Apply this diff:
```diff
-### Video Tutorial
+## Video Tutorial
@@
-{% embed url="https://youtu.be/VJ_wOHadVdw?si=_cUoFQGyxJD7V85a&t=703" %}
+<!-- markdownlint-disable-next-line MD034 -->
+{% embed url="https://youtu.be/VJ_wOHadVdw?si=_cUoFQGyxJD7V85a&t=703" %}
```
37-37: Remove stray HTML entity.

The standalone space entity (`&#x20;`) produces an unnecessary blank in the rendered page.
Apply this diff:
```diff
-&#x20;
+
```

docs/documentation/tabs-explained/psychrometric-chart/README.md (1)
21-25: Fix heading level and quiet markdownlint for embed URL.

Use h2 and disable MD034 for the GitBook embed line.
Apply this diff:
```diff
-### Video Tutorial
+## Video Tutorial
@@
-{% embed url="https://youtu.be/VJ_wOHadVdw?si=iAcBQpq3IgCNY-H6&t=582" %}
+<!-- markdownlint-disable-next-line MD034 -->
+{% embed url="https://youtu.be/VJ_wOHadVdw?si=iAcBQpq3IgCNY-H6&t=582" %}
```

docs/documentation/tabs-explained/temperature-and-humidity/README.md (1)
23-27: Fix heading level and quiet markdownlint for embed URL.

Consistent with other pages; embed requires a bare URL.
Apply this diff:
```diff
-### Video Tutorial
+## Video Tutorial
@@
-{% embed url="https://youtu.be/VJ_wOHadVdw?si=a1lgX6Lpt8fUXiCr&t=433" %}
+<!-- markdownlint-disable-next-line MD034 -->
+{% embed url="https://youtu.be/VJ_wOHadVdw?si=a1lgX6Lpt8fUXiCr&t=433" %}
```

docs/documentation/tabs-explained/outdoor-comfort/README.md (3)
5-5: Tighten wording: “aims to be the measure” → “is a measure”.

Improves clarity and avoids implying an unfulfilled aim.
Apply this diff:
```diff
-The [Universal Thermal Climate Index](http://www.utci.org/index.php) (UTCI), introduced in 1994, aims to be the measure of human physiological reaction to the atmospheric environment.
+The [Universal Thermal Climate Index](http://www.utci.org/index.php) (UTCI), introduced in 1994, is a measure of human physiological response to the atmospheric environment.
```
16-16: Polish sentence following the bullet list.

Minor grammar tweak for readability.
Apply this diff:
```diff
-to calculate a reference environmental temperature causing strain when compared to an individual's response to the real environment. It is based on [Fiala et al](https://link.springer.com/article/10.1007/s00484-011-0424-7).'s multi-node model of thermo-regulation.
+to calculate a reference environmental temperature that reflects thermal strain when compared to an individual's response to the real environment. It is based on the [Fiala et al.](https://link.springer.com/article/10.1007/s00484-011-0424-7) multi-node model of thermoregulation.
```
28-32: Fix heading level and quiet markdownlint for embed URL.

Align with MD001 and suppress MD034 for the GitBook embed.
Apply this diff:
```diff
-### Video Tutorial
+## Video Tutorial
@@
-{% embed url="https://youtu.be/VJ_wOHadVdw?si=hIa-95u3wpeP6eq0&t=781" %}
+<!-- markdownlint-disable-next-line MD034 -->
+{% embed url="https://youtu.be/VJ_wOHadVdw?si=hIa-95u3wpeP6eq0&t=781" %}
```

tests/python/test_utils.py (3)
29-35: Avoid writing test artifacts to an unscoped relative path

The pickle file is created in the working directory, which can pollute the repo and cause flaky behavior across environments. Prefer a scoped temp dir or a tests data directory.
Example using a per-test temp file with pathlib:
```diff
-def import_epw_test():
-    epw_test_file_path = "epw_test.pkl"
+from pathlib import Path
+
+def import_epw_test(tmp_dir: Path | None = None):
+    epw_test_file_path = Path(tmp_dir or Path(__file__).parent) / "epw_test.pkl"
@@
-    if not os.path.isfile(epw_test_file_path):
-        save_epw_test(path_file=epw_test_file_path)
+    if not epw_test_file_path.is_file():
+        save_epw_test(path_file=str(epw_test_file_path))
-    return pd.read_pickle(epw_test_file_path, compression="gzip")
+    return pd.read_pickle(str(epw_test_file_path), compression="gzip")
```

If you’re using pytest, you can pass tmp_path to import_epw_test and keep the artifact out of the repo.
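For illustration, a runnable sketch of the cached-artifact pattern with a scoped temp directory; pickle stands in for the real pandas/EPW round-trip, and tempfile stands in for pytest’s tmp_path fixture:

```python
import pickle
import tempfile
from pathlib import Path


def import_epw_test(cache_path: Path):
    """Load cached test data, creating the cache on first use."""
    if not cache_path.is_file():
        data = {"tdb": [20.0, 21.5]}  # stand-in for the parsed EPW DataFrame
        cache_path.write_bytes(pickle.dumps(data))
    return pickle.loads(cache_path.read_bytes())


# In a pytest test you would accept tmp_path as a fixture argument instead
with tempfile.TemporaryDirectory() as d:
    first = import_epw_test(Path(d) / "epw_test.pkl")   # creates the cache
    second = import_epw_test(Path(d) / "epw_test.pkl")  # reuses it
    assert first == second == {"tdb": [20.0, 21.5]}
```

The artifact is deleted with the temp directory, so repeated local runs and CI never leave stray files in the repo.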
38-46: Make the network check robust and skip cleanly when offline
- Printing the Response object adds noise to test logs.
- Only catching ConnectionError misses other network failures (Timeout, HTTPError).
- Tests that depend on external services are flaky; mark them as skipped when offline.
If you use pytest, consider:
```diff
+import pytest
@@
-        # check tha the climate.onebuilding website is on
-        print(requests.get("https://climate.onebuilding.org", timeout=2))
+        # Quick online check; skip test if unreachable
+        requests.head("https://climate.onebuilding.org", timeout=3).raise_for_status()
         df = import_epw_test()
         data_table = summary_table_tmp_rh_tab(df, "RH", UnitSystem.SI)
         assert data_table.data[0]["month"] == "Jan"
-    except requests.exceptions.ConnectionError:
-        pass
+    except requests.exceptions.RequestException as e:
+        pytest.skip(f"Skipping: offline or site unreachable ({e})")
```

If sticking to unittest-style, replace pytest.skip(...) with a simple return and log a reason.
42-45: Assertion depends on implicit month ordering; guard against locale/ordering drift

The test assumes the first row is Jan. If month label logic or ordering changes, this becomes brittle.
Consider asserting presence and count instead:
```python
months = [row["month"] for row in data_table.data[:-1]]  # exclude 'Year'
assert months[0] == "Jan" and months[-1] == "Dec" and len(months) == 12
```

pages/lib/import_one_building_files.py (1)
79-85: File path management and atomic writes

Hard-coding "./assets/data/one_building.csv" couples to CWD and risks partial writes. Consider using pathlib with a resolved path and atomic write to avoid corruption on failures.
```diff
-        df_old = pd.read_csv("./assets/data/one_building.csv", compression="gzip")
+        from pathlib import Path
+
+        data_path = Path(__file__).resolve().parents[2] / "assets" / "data" / "one_building.csv"
+        df_old = pd.read_csv(data_path, compression="gzip")
         df_old = pd.concat([df_old, df]).drop_duplicates()
-        df_old.to_csv("./assets/data/one_building.csv", index=False, compression="gzip")
+        tmp = data_path.with_suffix(".csv.tmp")
+        df_old.to_csv(tmp, index=False, compression="gzip")
+        tmp.replace(data_path)
     except FileNotFoundError:
-        df.to_csv("./assets/data/one_building.csv", index=False, compression="gzip")
+        data_path.parent.mkdir(parents=True, exist_ok=True)
+        df.to_csv(data_path, index=False, compression="gzip")
```

pages/lib/charts_sun.py (3)
165-165: Typos in comments: “annalemma” and “sostices”

Minor, but these show up in code reading.
```diff
-    # draw annalemma
+    # draw analemma
@@
-    # draw equinox and sostices
+    # draw equinoxes and solstices
```

Repeat the solstice comment fix in custom_cartesian_solar (Line 417).
Also applies to: 238-238
131-137: Remove unused times initialization

The times computed here are not used and are recomputed later in the equinox loop.
```diff
-    tz = "UTC"
-    times = pd.date_range(
-        "2019-01-01 00:00:00", "2020-01-01", inclusive="left", freq="h", tz=tz
-    )
-    delta = timedelta(days=0, hours=time_zone - 1, minutes=0)
-    times = times - delta
-    solpos = df.loc[df["apparent_elevation"] > 0, :]
+    tz = "UTC"
+    delta = timedelta(days=0, hours=time_zone - 1, minutes=0)
+    solpos = df.loc[df["apparent_elevation"] > 0, :]
```
418-443: DRY: duplicate equinox/solstice path code across two functions

The equinox/solstice path generation is duplicated in polar_graph and custom_cartesian_solar. Consider a small helper to compute and add traces; it will simplify maintenance.
If helpful, I can draft a shared helper taking figure, latitude, longitude, tz, delta, and a “polar” flag to add the appropriate trace type. Do you want me to push a refactor?
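One possible shape for that helper, sketched without Plotly so it runs standalone; FigureStub stands in for go.Figure, the trace-type names mirror Plotly’s scatter/scatterpolar, and everything else is an assumption about the surrounding code:

```python
class FigureStub:
    """Minimal stand-in for plotly.graph_objects.Figure."""
    def __init__(self):
        self.traces = []

    def add_trace(self, trace):
        self.traces.append(trace)


def add_solar_path(fig, elevation, azimuth, polar=False, name=""):
    """Append one equinox/solstice path, picking the trace type by projection."""
    if polar:
        trace = {"type": "scatterpolar", "r": elevation, "theta": azimuth, "name": name}
    else:
        trace = {"type": "scatter", "x": azimuth, "y": elevation, "name": name}
    fig.add_trace(trace)
    return fig


fig = FigureStub()
add_solar_path(fig, [10, 35, 10], [90, 180, 270], polar=True, name="June solstice")
add_solar_path(fig, [5, 20, 5], [90, 180, 270], polar=False, name="December solstice")
print([t["type"] for t in fig.traces])  # → ['scatterpolar', 'scatter']
```

Both call sites would then compute the solar positions once and delegate trace creation to the helper, so styling tweaks happen in one place.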
docs/documentation/tabs-explained/sun-and-cloud/README.md (3)
3-6: Minor editorial cleanups and broken-reference check
- “cartesian” is correct; the bullet link now ends with a slash—verify it actually resolves.
```diff
-* [Apparent sunpath for the location (spherical and cartesian projection)](broken-reference/)
+* [Apparent sunpath for the location (spherical and cartesian projection)](broken-reference/)
```

If that target is indeed a placeholder, consider updating it or removing the link until live.
14-16: Spelling: “cartesian”

The trailing-space fix is good; also ensure consistent capitalization/spelling across images and text.
18-21: Wording: “overlaid” is the standard form

Minor language polish.
```diff
-Clima optionally allows a variety of variables to be overlayed on either sun path type.
+Clima optionally allows a variety of variables to be overlaid on either sun path type.
```

pages/lib/extract_df.py (1)
24-24: Consider the performance impact of the decorator in production.

The `@code_timer` decorator will print timing information for every function call. While useful for development and debugging, this could generate excessive console output in production environments. Consider making the timing decorator conditional based on the debug mode:
```python
# In pages/lib/utils.py or a new decorators module
def conditional_code_timer(func):
    """Apply code_timer only in debug mode"""
    if AppConfig.DEBUG:
        return code_timer(func)
    return func
```

Then use `@conditional_code_timer` instead of `@code_timer` for these functions.

Also applies to: 51-51, 77-77
pages/lib/global_scheme.py (1)
370-370: Consider documenting the color change rationale.

The first color in the wind_speed list was changed from white (#ffffff) to light gray (#D3D3D3). While this likely improves visibility, it would be helpful to document why this visual change was made.
Consider adding a comment explaining the color choice:
"wind_speed": { "name": "Wind speed", "color": [ "#D3D3D3", # Light gray instead of white for better visibility "#b2f2ff", # ... rest of colorspages/lib/template_graphs.py (2)
85-86: Consider removing or conditionally enabling the @code_timer decorator in production.

While timing decorators are useful for performance analysis during development, they can clutter production logs. Consider making this conditional based on an environment variable or debug flag.
238-239: Commented-out decorator creates confusion.

The commented-out `@code_timer` decorator on Line 238 could confuse future maintainers. Either remove it completely or add a comment explaining why it's disabled for this specific function.

Apply this diff:
```diff
-# @code_timer
+# Note: @code_timer disabled here as daily_profile is called frequently and would clutter logs
```

pages/summary.py (2)
3-4: Import cleanup: remove unused `PreventUpdate` import

The `PreventUpdate` exception is imported but only used in the download functions. Since it's already imported from `dash.exceptions`, the additional import is unnecessary.
188-197: Consider removing or uncommenting the setpoints callback

There's a commented-out callback for updating setpoints based on unit system changes. This functionality seems useful and should either be implemented or removed entirely to avoid code clutter.
Consider either implementing this callback or removing it:
```diff
-# @callback(
-#     [Output('input-hdd-set-point', 'value'), Output('input-cdd-set-point', 'value')],
-#     [Input('si-ip-radio-input', 'value')]
-# )
-# def update_setpoints(si_ip_unit_store_data):
-#     if si_ip_unit_store_data == 'si':
-#         return 10, 18
-#     else:
-#         return 50, 64
-#
```

pages/select.py (1)
233-241: Consider using a list comprehension for route outputs

The explicit listing of routes is clear but could be more maintainable using a data structure.
Consider defining the routes in a list for easier maintenance:
```python
MAIN_ROUTES = [
    PageUrls.SELECT.value,
    PageUrls.SUMMARY.value,
    PageUrls.T_RH.value,
    PageUrls.SUN.value,
    PageUrls.WIND.value,
    PageUrls.PSY_CHART.value,
    PageUrls.EXPLORER.value,
    PageUrls.OUTDOOR.value,
    PageUrls.NATURAL_VENTILATION.value,
]

# Then in the callback decorator:
@callback(
    [Output(route, "disabled") for route in MAIN_ROUTES]
    + [Output("banner-subtitle", "children")],
    ...
)
```
```ini
[bumpversion:file:pages/lib/layout.py]
search = Version: {current_version}
replace = Version: {new_version}
```
💡 Verification agent
🧩 Analysis chain
Verify that the target file actually contains the search template
The search pattern "Version: {current_version}" must exist in pages/lib/layout.py or the bump will no-op/fail. Same for the manifest id pattern.
Run this to verify before cutting a release:
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
# Expect at least one match in each target
rg -n "^.*Version:\s*$(awk -F'= ' '/current_version/ {print $2}' .bumpversion.cfg)\b" pages/lib/layout.py -C0
rg -nP '"id":\s*"'$(awk -F'= ' '/current_version/ {print $2}' .bumpversion.cfg)'"' assets/manifest.json -C0
```

Length of output: 521
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
# Extract the current version from .bumpversion.cfg
current_version=$(awk -F'= ' '/current_version/ {print $2}' .bumpversion.cfg)
echo "current_version is '$current_version'"
# Verify the layout.py contains the version template
echo "Searching for 'Version: $current_version' in pages/lib/layout.py"
rg -nF "Version: $current_version" pages/lib/layout.py || echo "No match in pages/lib/layout.py"
# Verify the manifest.json contains the id template
echo "Searching for '\"id\": \"$current_version\"' in assets/manifest.json"
rg -nF "\"id\": \"$current_version\"" assets/manifest.json || echo "No match in assets/manifest.json"
```

Length of output: 1523
bumpversion search patterns will not match existing placeholders
The current bumpversion config expands {current_version} to 0.9.0, but neither target file contains that literal string—both still show the template placeholders. As a result, running the bump will no-op on these files.
• pages/lib/layout.py
– contains Version: {current_version}
– needs to contain Version: 0.9.0 (or whatever current_version is)
• assets/manifest.json
– contains "id": "{current_version}"
– needs to contain "id": "0.9.0"
Please update those files to include the actual current version string (or adjust your search patterns accordingly) before cutting a release.
🤖 Prompt for AI Agents
.bumpversion.cfg lines 6-8: the bumpversion search patterns won't match because
target files still contain template placeholders; either replace the
placeholders in pages/lib/layout.py and assets/manifest.json with the current
version string (e.g., change 'Version: {current_version}' to 'Version: 0.9.0' in
pages/lib/layout.py and '"id": "{current_version}"' to '"id": "0.9.0"' in
assets/manifest.json), or alter the .bumpversion.cfg search/replace to match the
placeholder form (e.g., search for 'Version: {current_version}' / '"id":
"{current_version}"' and replace with the new_version format); implement one of
these fixes and commit the updated files.
```js
cy.get('.nav-item')
  .contains(name)
  .click();
```
🛠️ Refactor suggestion
Broadened selector risks clicking disabled/unintended nav items; tighten it.
Switching from a disabled-aware selector to .nav-item may target hidden/disabled items or multiple matches. Filter for visibility and non-disabled state to reduce flakiness.
Apply this diff to make the click more robust without relying on new libs:
```diff
-function click_tab(name) {
-  cy.get('.nav-item')
-    .contains(name)
-    .click();
-}
+function click_tab(name) {
+  // Click the first visible, enabled nav item that matches the label
+  cy.contains('.nav-item', name)
+    .filter(':visible')
+    .filter(':not([aria-disabled="true"])')
+    .first()
+    .click();
+}
```

If feasible, adding data attributes in the app (e.g., data-testid="nav-item") will make this even more stable: cy.contains('[data-testid="nav-item"]', name)....
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
function click_tab(name) {
  // Click the first visible, enabled nav item that matches the label
  cy.contains('.nav-item', name)
    .filter(':visible')
    .filter(':not([aria-disabled="true"])')
    .first()
    .click();
}
```
🤖 Prompt for AI Agents
In tests/node/cypress/e2e/spec.cy.js around lines 2 to 4, the selector
cy.get('.nav-item').contains(name).click() is too broad and may click hidden or
disabled items; narrow the selection to visible, enabled nav items before
clicking. Replace with a chain that scopes contains to visible/non-disabled
elements, e.g. use cy.contains('.nav-item:visible:not([disabled])',
name).click() or
cy.get('.nav-item').contains(name).filter(':visible').not('[disabled]').click();
if possible prefer using a stable attribute (e.g., data-testid="nav-item") and
target cy.contains('[data-testid="nav-item"]:visible:not([disabled])',
name).click() for maximum robustness.
Refine text in various markdown files to improve readability and remove unnecessary formatting characters. Ensure consistency in presentation across the documentation.
Adjust the order of climate data constants to ensure proper display sequence in the application.
Change Ruff check command to output format compatible with GitHub and modify the format command to verify changes instead of applying them.
Refine exception handling to specifically catch ValueError, IndexError, and KeyError, providing clearer error messages. Also, adjust latitude calculation to ensure consistent geographical positioning.
Ensure that the percentage of diffuse horizontal solar radiation is calculated correctly by checking for a zero denominator, preventing division errors and ensuring accurate reporting.
Specify exception types to catch in order to prevent unhandled exceptions and ensure smoother graph rendering.
Ensure proper formatting by adding a newline at the end of the file to comply with YAML standards.
Update the text in the user survey prompt to improve clarity and correct grammatical errors.
Eliminate the code that modifies sys.path to improve clarity and maintainability of the test utilities. This change does not affect functionality.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
docs/documentation/tabs-explained/tab-summary/clima-dataframe.md (1)
28-32: Missing Computations for New Variables & Title-Case Consistency

Analysis of the codebase reveals that while the documentation lists five metrics, only three are actually implemented. Please address the following critical issues:
- Missing DataFrame/UI support
• “Absolute Humidity” and “Wet-Bulb Temperature” do not appear anywhere in the data-processing or UI code (e.g., pages/summary.py, pages/lib, psy-chart.py). Either implement these computations and expose them in the DataFrame/UI overlays, or remove them from the docs.
- Title-Case alignment in docs
• docs/documentation/tabs-explained/tab-summary/clima-dataframe.md (lines 28–32):
– Change “[Absolute humidity]” → “[Absolute Humidity]”
– Change “[Wet-bulb temperature]” → “[Wet-Bulb Temperature]”
– Change “[Saturation pressure]” → “[Saturation Pressure]”
- (Optional) Unit-System note
• Consider adding a one-line notice near the top of clima-dataframe.md indicating that displayed units follow the selected Unit System (SI/IP).

pages/select.py (2)
162-166: Harden upload path: guard empty lists and fix user message; log instead of print.
- If Upload.contents is an empty list, Line 166 will raise IndexError. Guard for truthy lists and filenames.
- Parsing errors are currently reported as “wrong extension,” which is misleading. Use “invalid_format.”
- Replace print with logging to avoid stdout noise in production.
```diff
-    elif (
-        ctx.triggered[0]["prop_id"] == "upload-data.contents"
-        and list_of_contents is not None
-    ):
+    elif (
+        ctx.triggered[0]["prop_id"] == "upload-data.contents"
+        and list_of_contents
+        and list_of_names
+    ):
         content_type, content_string = list_of_contents[0].split(",")
@@
-    except (ValueError, IndexError, KeyError) as e:
-        print(f"Error parsing EPW file: {e}")
+    except (ValueError, IndexError, KeyError) as e:
+        import logging
+        logging.exception("Error parsing EPW file")
         return (
             None,
             None,
             True,
-            messages_alert["wrong_extension"],
+            messages_alert["invalid_format"],
             "warning",
         )
```

Optional: improve decoding robustness (handles UTF-8 with BOM first):
```diff
-    try:
-        decoded_string = decoded_bytes.decode("utf-8")
-    except UnicodeDecodeError:
-        decoded_string = decoded_bytes.decode("latin-1")
+    for enc in ("utf-8-sig", "utf-8", "latin-1"):
+        try:
+            decoded_string = decoded_bytes.decode(enc)
+            break
+        except UnicodeDecodeError:
+            continue
```

Also applies to: 193-201
296-299: Click-to-URL extraction will break: customdata is never set and regex is brittle. Use Plotly custom_data with the raw URL.

Right now ‘customdata’ will be None because the figures only set hover_data. Even if present, parsing HTML with regex is risky. Set custom_data=["Source"] in both traces and read it directly.
Apply together with the figure changes below:
```diff
-    if click_map:
-        url = re.search(
-            r'href=[\'"]?([^\'" >]+)', click_map["points"][0]["customdata"][-1]
-        ).group(1)
-        return not is_open, url
+    if click_map:
+        point = click_map["points"][0]
+        custom = point.get("customdata")
+        if not custom:
+            raise PreventUpdate
+        url = custom[0]
+        return not is_open, url
```
♻️ Duplicate comments (6)
docs/documentation/tabs-explained/psychrometric-chart/README.md (1)
3-3: Terminology corrected: “psychrometric” ✅

Good fix aligning with the page title and domain terminology.
docs/README.md (1)
13-22: Video section structure and linter-friendly fallback: LGTM

Heading level is correct after H1, and adding the explicit “Watch on YouTube” link addresses MD034 while keeping the embed.
cloudbuild.yaml (1)
11-15: EOF newline missing; fix to satisfy yamllint and avoid CI failures.

The file lacks a newline at EOF. Add a trailing newline so linters and some tooling don’t error.
Apply:
```diff
 '--cpu', '$_CPU']
+
```

pages/lib/template_graphs.py (2)
853-853: Improved error handling with specific exceptions.

The change from bare `except` to specific exception types (`KeyError`, `IndexError`, `TypeError`) improves debugging and follows best practices.
264-271: Division by zero protection added.Good implementation of the safety check to prevent division by zero when calculating diffuse radiation percentage.
pages/select.py (1)
333-333: Duplicate latitude adjustment has been removed.Past comment flagged a double increment on latitude; only a single offset remains. LGTM.
🧹 Nitpick comments (32)
docs/documentation/tabs-explained/sun-and-cloud/global-and-diffuse-horizontal-solar-radiation/README.md (1)
14-14: Polish grammar and link punctuation on the updated lineMove the period outside the link and drop the unnecessary comma after the unit parenthesis for cleaner prose.
Apply this diff:
```diff
-The integral of the curves in the graphs is the total energy (in Wh/m²), supplied by the sun. Be careful in considering the [different types of solar radiation.](global-diffuse-and-normal-solar-radiation-explained.md)
+The integral of the curves in the graphs is the total energy (in Wh/m²) supplied by the sun. Be careful to consider the [different types of solar radiation](global-diffuse-and-normal-solar-radiation-explained.md).
```

docs/documentation/tabs-explained/tab-summary/clima-dataframe.md (2)
3-3: Polish the intro sentence for grammar and clarity ("other we calculate" → "variables we compute"; DataFrame casing).

Tighten the prose and align casing with pandas terminology.
```diff
-**Clima** calculates new variables and creates a new **dataframe** containing the variables already inside the original EPW files and other we calculate. Users can overlay all the variables on the [sun path](../sun-and-cloud/), on the [psychrometric chart](../psychrometric-chart/), and on the customizable graphs in the [data explorer](../data-explorer.md).
+**Clima** calculates additional variables and creates a new **DataFrame** that includes both the original EPW fields and the variables we compute. Users can overlay any variable on the [sun path](../sun-and-cloud/), the [psychrometric chart](../psychrometric-chart/), and the customizable graphs in the [data explorer](../data-explorer.md).
```
16-24: Good normalization of bullet formatting; fix lingering MD039 issues (spaces inside link text).

The reformat looks consistent. Remove the stray spaces inside link text on Lines 19–20 flagged by markdownlint.
```diff
-* [Diffuse Horizontal Illuminance ](https://bigladdersoftware.com/epx/docs/22-2/auxiliary-programs/energyplus-weather-file-epw-data-dictionary.html#field-diffuse-horizontal-illuminance)
-* [Zenith Luminance ](https://bigladdersoftware.com/epx/docs/22-2/auxiliary-programs/energyplus-weather-file-epw-data-dictionary.html#field-zenith-luminance)
+* [Diffuse Horizontal Illuminance](https://bigladdersoftware.com/epx/docs/22-2/auxiliary-programs/energyplus-weather-file-epw-data-dictionary.html#field-diffuse-horizontal-illuminance)
+* [Zenith Luminance](https://bigladdersoftware.com/epx/docs/22-2/auxiliary-programs/energyplus-weather-file-epw-data-dictionary.html#field-zenith-luminance)
```

docs/documentation/tabs-explained/psychrometric-chart/psychrometric-chart-explained.md (2)
5-5: Clarity: prefer “x-axis/y-axis” over “abscissae/ordinates”.

These terms are correct but less familiar to many readers. Using “x‑axis/y‑axis” improves readability without losing precision.
```diff
-The **Clima** psychrometric diagram shows dry bulb temperature on the abscissae, specific humidity on the ordinates, and relative humidity as parametric curves inside the graph.
+The **Clima** psychrometric diagram shows dry-bulb temperature on the x-axis, specific humidity on the y-axis, and relative humidity as parametric curves inside the graph.
```
17-17: Grammar/style: tighten phrasing and fix plurality.

Minor tweaks for subject–verb agreement and smoother wording.
```diff
-The simplest transformation to be analyzed on the psychrometric diagram is the heating and cooling processes. The transition from the starting condition (1) to the final one (2) occurs horizontally, at constant humidity ratio values. The final condition (2) can be inspected as a function of the starting one.
+The simplest transformations to analyze on the psychrometric diagram are heating and cooling. The transition from the starting condition (1) to the final one (2) is horizontal at a constant humidity ratio. The final condition (2) can be evaluated based on the starting condition (1).
```

docs/documentation/tabs-explained/README.md (1)
11-13: Fix typos and improve flow.

Correct “thy” → “they” and smooth the following sentence.
```diff
-Although there is a logical sequence in the organization of the tabs, thy can be accessed in any order.
+Although there is a logical sequence in the organization of the tabs, they can be accessed in any order.
 
-The Followin section will explain the content and the usage of each tab.
+The following section explains the content and usage of each tab.
```

docs/documentation/tabs-explained/outdoor-comfort/utci-explained.md (3)
3-3: Minor style: simplify “presence or absence”.

```diff
-The UTCI tab allows users to analyze outdoor thermal comfort for a combination of different meteorological conditions based on the presence or absence of sun and wind.
+The UTCI tab lets users analyze outdoor thermal comfort under different meteorological conditions, with or without sun and wind.
```
12-12: Concise wording.

```diff
-The UTCI can then be visualized for the entire year for the scenario chosen.
+The UTCI can then be visualized for the entire year for the chosen scenario.
```
20-20: Grammar/style: make active and precise.

```diff
-The UTCI is a useful tool to design the outdoor space, to maximize the number of comfortable hours. The designer can influence two factors out of the four driving outdoor comfort: radiant temperature (i.e. exposure to the sun) and wind speed (i.e. exposure to the wind).
+UTCI is a useful tool for designing outdoor spaces and maximizing comfortable hours. Designers can directly influence two of the four drivers of outdoor comfort: radiant temperature (sun exposure) and wind speed (wind exposure).
```

docs/documentation/tabs-explained/psychrometric-chart/README.md (2)
9-9: Clarify wording about the “None” option and frequency visualization.

Shorter sentences and a clearer explanation of opacity-as-frequency.
```diff
-With the first choice in the drop-down list, "None", it is possible to view temperature conditions in the psychrometric diagram over the entire year. The visualized dots have the same gradient with a transparency rate, they are not colored according to a legend. Multiplying them when overlaid provides a visualization of their frequency, so the most common conditions.
+With the first option in the drop-down list, "None," you can view temperature conditions on the psychrometric diagram for the entire year. All points share the same color with transparency (no legend). Overlapping points increase opacity, visually indicating frequency—the most common conditions appear darker.
```
21-26: Heading level: bump to H2 to satisfy MD001; keep embed as-is.

markdownlint expects heading levels to increment by one. The embed URL is fine inside the Liquid macro; MD034 can be ignored here.
```diff
-### Video Tutorial
+## Video Tutorial
```

docs/documentation/tabs-explained/natural-ventilation.md (1)
29-35: Heading level: bump to H2 to satisfy MD001; embed is OK.

Keep the Liquid embed; only adjust the heading level for consistent hierarchy.
```diff
-### Video Tutorial
+## Video Tutorial
```

docs/documentation/tabs-explained/tab-summary/climate-profiles-explained.md (1)
5-7: Subject–verb agreement and minor copy edit for clarity.

Use a singular verb with “graph” and tighten the wording; also remove the extra space before the next sentence.
```diff
-The Climate Profiles graph are [Violin Plots](https://en.wikipedia.org/wiki/Violin_plot).
-They show the [probability density](https://en.wikipedia.org/wiki/Probability_density_function) of the data at different values, usually smoothed by a [kernel density estimator](https://en.wikipedia.org/wiki/Kernel_density_estimator).
+The Climate Profiles graph is composed of [violin plots](https://en.wikipedia.org/wiki/Violin_plot).
+They show the [probability density](https://en.wikipedia.org/wiki/Probability_density_function) of the data at different values, usually smoothed by a [kernel density estimator](https://en.wikipedia.org/wiki/Kernel_density_estimator). Wider sections of the violin plot represent a higher probability that members of the population will take on the given value; the skinnier sections represent a lower probability.
```

docs/README.md (1)
18-18: Title-case the caption for consistency with the product name.

Minor consistency tweak: “CBE Clima Tool” is capitalized elsewhere.
```diff
-CBE Clima tool tutorial and overview
+CBE Clima Tool tutorial and overview
```

docs/documentation/tabs-explained/tab-home.md (2)
9-15: Fix parallelism and align to US English (“analyze … upload …”).

Tighten the sentence and keep wording consistent with the rest of the docs (“analyze”).
```diff
-## Video Tutorial
-
-Learn more about how to analyse the climate of a specific location and uploading your custom EPW file by watching the following video.
+## Video Tutorial
+
+Learn how to analyze the climate for a specific location and upload your own EPW file in the following video.
```
13-15: Add a non-embed fallback link to satisfy MD034.

Like the main README, provide a standard Markdown link immediately after the embed so markdownlint has a non-bare link to parse.
```diff
 {% embed url="https://youtu.be/VJ_wOHadVdw?si=SxvUzaI9rCNIFFs0&t=136" %}
 How to select or upload an EPW file
 {% endembed %}
+
+[Watch on YouTube](https://youtu.be/VJ_wOHadVdw?si=SxvUzaI9rCNIFFs0&t=136)
```

docs/documentation/tabs-explained/sun-and-cloud/README.md (2)
14-16: Capitalize “Cartesian” to match conventional usageMinor terminology consistency.
24-29: Add a linter-friendly fallback link (or suppress MD034) for the embed.

To keep markdownlint happy without changing how GitBook renders embeds, add a standard link below the embed (or wrap the embed with MD034 disable/enable comments).
Option A — add fallback link:
```diff
 {% embed url="https://youtu.be/VJ_wOHadVdw?si=mB2xNH57MWW_4CRR&t=447" %}
+
+[Watch on YouTube](https://youtu.be/VJ_wOHadVdw?si=mB2xNH57MWW_4CRR&t=447)
```

Option B — inline suppression:
```diff
+<!-- markdownlint-disable MD034 -->
 {% embed url="https://youtu.be/VJ_wOHadVdw?si=mB2xNH57MWW_4CRR&t=447" %}
+<!-- markdownlint-enable MD034 -->
```

cloudbuild.yaml (3)
3-7: Use immutable image tags to make deployments reproducible.

Building and pushing without a content-addressable reference (e.g., a SHORT_SHA tag) makes the deploy step depend on a moving “latest”-like tag. Tag with $SHORT_SHA and deploy that tag to ensure deterministic rollouts and traceability.
Apply:
```diff
- args: ['build', '-t', '$_GCR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME', '.']
+ args: ['build',
+        '-t', '$_GCR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME:$SHORT_SHA',
+        '-t', '$_GCR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME:latest',
+        '.']
 ...
- args: ['push', '$_GCR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME']
+ args: ['push', '$_GCR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME:$SHORT_SHA']
+
+- name: 'gcr.io/cloud-builders/docker'
+  args: ['push', '$_GCR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME:latest']
```
9-15: Prefer the dedicated gcloud builder image.

Use gcr.io/cloud-builders/gcloud instead of the full cloudsdk image; it’s smaller and purpose-built for Cloud Build.
Apply:
```diff
-- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
-  entrypoint: 'gcloud'
-  args: ['run', 'deploy', '$_REPO_NAME',
+- name: 'gcr.io/cloud-builders/gcloud'
+  args: ['run', 'deploy', '$_REPO_NAME',
         '--image', '$_GCR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME:$SHORT_SHA',
         '--region', '$_REGION',
         '--memory', '$_MEMORY',
         '--cpu', '$_CPU']
```
1-15: Consider migrating from Container Registry to Artifact Registry.

Container Registry (gcr.io/us.gcr.io) is deprecated in favor of Artifact Registry. Moving now avoids future migration work and improves security/controls. This affects the repository URL and substitutions.
Example adjustments (illustrative):
```diff
- args: ['build',
-        '-t', '$_GCR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME:$SHORT_SHA',
-        '-t', '$_GCR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME:latest',
+ args: ['build',
+        '-t', '$_AR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME:$SHORT_SHA',
+        '-t', '$_AR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME:latest',
         '.']
 ...
- args: ['push', '$_GCR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME:$SHORT_SHA']
+ args: ['push', '$_AR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME:$SHORT_SHA']
 ...
- '--image', '$_GCR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME:$SHORT_SHA',
+ '--image', '$_AR/$_PROJ_NAME/$_REPO_NAME/$_IMG_NAME:$SHORT_SHA',
```

Where $_AR would look like: us-central1-docker.pkg.dev
docs/contributing/run-project-locally.md (3)
93-96: Verify the requirements generation command; consider the more portable lock export.

`pipenv requirements` works on newer Pipenv, but many environments still use `pipenv lock -r`. Recommend the lock-based export to reduce breakage, optionally producing a dev requirements file as well.
Apply:
```diff
-gcloud components update --quiet
-pipenv requirements > requirements.txt
+gcloud components update --quiet
+# Export pinned production requirements from Pipfile.lock
+pipenv lock -r > requirements.txt
+# (Optional) Export dev requirements separately if your Dockerfile uses them
+# pipenv lock -r --dev > requirements-dev.txt
```
100-108: Avoid hard-coded GCP project/values; align with CI and reduce copy-paste errors.

The commands hard-code clima-316917 and specific substitutions. Parameterize to match your Cloud Build/GitHub workflow and prevent accidental deploys to the wrong project.
Apply:
```diff
- gcloud builds submit --project=clima-316917 --substitutions=_REPO_NAME="clima-test",_PROJ_NAME="clima-316917",_IMG_NAME="test",_GCR="us.gcr.io",_REGION="us-central1",_MEMORY="4Gi",_CPU="2"
+ PROJECT_ID="${PROJECT_ID:-clima-316917}"
+ REGION="${REGION:-us-central1}"
+ GCR="${GCR:-us.gcr.io}"
+ gcloud builds submit --project="${PROJECT_ID}" \
+   --substitutions=_REPO_NAME="clima-test",_PROJ_NAME="${PROJECT_ID}",_IMG_NAME="test",_GCR="${GCR}",_REGION="${REGION}",_MEMORY="4Gi",_CPU="2"
```

Repeat similarly for the “main” deploy block.
73-80: Confirm the version bump tool used (bumpversion vs bump2version).

Many projects use bump2version. Ensure the tool in your Pipfile/Makefile matches the command here; if it’s bump2version, update the docs and suggest invoking via pipenv.
Proposed tweak:
```diff
-bumpversion patch # alternatively you can use minor or major instead of patch
+# If using bump2version (most common):
+pipenv run bump2version patch  # or minor/major
```

.github/workflows/deploy.yml (1)
27-32: Surface deploy parameters as workflow env vars for clarity and DRYness.

Defining substitutions via env reduces duplication and makes future changes safer.
Apply:
```diff
 jobs:
   deploy:
     name: Deploying 🚀 Clima
     runs-on: ubuntu-latest
+    env:
+      REPO_NAME: clima
+      PROJ_NAME: clima-316917
+      IMG_NAME: main
+      GCR: us.gcr.io
+      REGION: us-central1
+      MEMORY: 4Gi
+      CPU: "2"
 ...
     - name: Building (🏗️) and Deploying (🚀)
       run: |
         gcloud builds submit \
-          --project=clima-316917 \
-          --substitutions=_REPO_NAME="clima",_PROJ_NAME="clima-316917",_IMG_NAME="main",_GCR="us.gcr.io",_REGION="us-central1",_MEMORY="4Gi",_CPU="2"
+          --project="${PROJ_NAME}" \
+          --substitutions=_REPO_NAME="${REPO_NAME}",_PROJ_NAME="${PROJ_NAME}",_IMG_NAME="${IMG_NAME}",_GCR="${GCR}",_REGION="${REGION}",_MEMORY="${MEMORY}",_CPU="${CPU}"
```

pages/lib/template_graphs.py (2)
238-238: Consider enabling the timing decorator for consistency.

The `daily_profile` function has a commented-out `@code_timer` decorator. For consistency with `yearly_profile` and to monitor performance uniformly, consider enabling it.

```diff
-# @code_timer
+@code_timer
 def daily_profile(df, var, global_local, si_ip):
```
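For context, a `code_timer`-style decorator is typically a thin wrapper like the sketch below; the project's actual implementation lives in `pages/lib/utils.py` and may differ in how it reports the elapsed time:

```python
import functools
import time

def code_timer(func):
    """Log how long the wrapped function takes to run."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.4f}s")
        return result
    return wrapper

@code_timer
def daily_profile_stub(n):  # stand-in for the real chart function
    return sum(range(n))
```

Because the wrapper uses `functools.wraps`, the decorated function keeps its name and docstring, which matters for Dash callback registration and debugging.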
320-320: Enable the timing decorator for `heatmap_with_filter`.

Similar to `daily_profile`, this function has a commented timing decorator. Consider enabling it for consistent performance monitoring.

```diff
-# @code_timer
+@code_timer
 def heatmap_with_filter(
```

pages/select.py (5)
65-70: Skeleton is never shown (visible=False); consider removing or wiring visibility.

As-is, the Skeleton wrapper is inert and adds DOM depth without value. Either remove it and use a plain container, or control its visibility from the map-loading callback.
Apply one of the following:
Option A — remove Skeleton wrapper:
```diff
- dmc.Skeleton(
-     visible=False,
-     id="skeleton-graph-container",
-     height=500,
-     children=html.Div(id="tab-one-map"),
- ),
+ html.Div(id="skeleton-graph-container"),
```

Option B — keep Skeleton but show it until the graph is injected:
```diff
- dmc.Skeleton(
-     visible=False,
+ dmc.Skeleton(
+     visible=True,
      id="skeleton-graph-container",
      height=500,
      children=html.Div(id="tab-one-map"),
  ),
```
217-223: UnitSystem comparison: compare to the enum’s value to avoid type drift.

If the radio input provides a string ("ip"), direct comparison to `UnitSystem.IP` (an Enum) may fail in some setups. Compare to `UnitSystem.IP.value`. Also, the `map_json` variable is only used to call `convert_data`; the returned value is ignored.
```diff
- map_json = json.dumps(mapping_dictionary)
- if si_ip_input == UnitSystem.IP:
-     map_json = convert_data(df, map_json)
+ mapping_json = json.dumps(mapping_dictionary)
+ if si_ip_input == UnitSystem.IP.value:
+     convert_data(df, mapping_json)  # modifies df in-place
```

If other pages expect si-ip-unit-store to hold the raw enum instead of a string, confirm the radio input values. If they are strings, aligning on strings across the app will reduce conversion churn.
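The type-drift concern is easy to demonstrate with a plain `Enum`; the member values here are assumptions about how `UnitSystem` is defined in `config.py`:

```python
from enum import Enum

class UnitSystem(Enum):
    SI = "si"
    IP = "ip"

# A string coming from a radio input never equals the Enum member itself,
# but it does equal the member's .value.
print("ip" == UnitSystem.IP)        # False
print("ip" == UnitSystem.IP.value)  # True
```

Note that if `UnitSystem` were declared as `class UnitSystem(str, Enum)`, the direct comparison would succeed; which fix is right depends on how the enum is actually defined.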
249-277: Guard meta access in subtitle to avoid KeyError when data is present but meta is not.

Rare, but possible on partial load or error recovery. Use .get with defaults.
```diff
- "Current Location: " + meta["city"] + ", " + meta["country"],
+ f"Current Location: {meta.get('city', 'N/A')}, {meta.get('country', '')}",
```
328-331: Add basic I/O error handling and consider caching to avoid reading assets on every nav.

Wrap the file read and json load to prevent hard failures in production, and cache the result (e.g., lru_cache or Flask-Caching) to reduce I/O.
```diff
- with open("./assets/data/epw_location.json", encoding="utf8") as data_file:
-     data = json.load(data_file)
+ try:
+     with open("./assets/data/epw_location.json", encoding="utf8") as data_file:
+         data = json.load(data_file)
+ except (FileNotFoundError, json.JSONDecodeError):
+     raise PreventUpdate
```

Example simple caching outside this function (add near module top):
```python
from functools import lru_cache

@lru_cache(maxsize=1)
def _load_epw_locations():
    with open("./assets/data/epw_location.json", encoding="utf8") as f:
        return json.load(f)
```

Then use: `data = _load_epw_locations()`
312-316: Minor copy nit: “Analyse” vs “Analyze.”

If following U.S. English elsewhere, consider “Analyze.” Otherwise ignore.
```diff
- return [f"Analyse data from {click_map['points'][0]['hovertext']}?"]
+ return [f"Analyze data from {click_map['points'][0]['hovertext']}?"]
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (28)
- .github/workflows/deploy.yml (1 hunks)
- .github/workflows/python.yml (2 hunks)
- cloudbuild.yaml (1 hunks)
- config.py (1 hunks)
- docs/README.md (1 hunks)
- docs/contributing/contributing.md (1 hunks)
- docs/contributing/run-project-locally.md (2 hunks)
- docs/documentation/tabs-explained/README.md (1 hunks)
- docs/documentation/tabs-explained/natural-ventilation.md (1 hunks)
- docs/documentation/tabs-explained/outdoor-comfort/utci-explained.md (2 hunks)
- docs/documentation/tabs-explained/psychrometric-chart/README.md (2 hunks)
- docs/documentation/tabs-explained/psychrometric-chart/psychrometric-chart-explained.md (2 hunks)
- docs/documentation/tabs-explained/sun-and-cloud/README.md (2 hunks)
- docs/documentation/tabs-explained/sun-and-cloud/cloud-coverage.md (1 hunks)
- docs/documentation/tabs-explained/sun-and-cloud/customizable-daily-and-hourly-maps.md (1 hunks)
- docs/documentation/tabs-explained/sun-and-cloud/global-and-diffuse-horizontal-solar-radiation/README.md (1 hunks)
- docs/documentation/tabs-explained/tab-home.md (1 hunks)
- docs/documentation/tabs-explained/tab-summary/README.md (1 hunks)
- docs/documentation/tabs-explained/tab-summary/clima-dataframe.md (2 hunks)
- docs/documentation/tabs-explained/tab-summary/climate-profiles-explained.md (1 hunks)
- docs/documentation/tabs-explained/tab-summary/degree-days-explained.md (1 hunks)
- docs/documentation/tabs-explained/wind/how-to-read-a-wind-rose.md (1 hunks)
- pages/lib/layout.py (1 hunks)
- pages/lib/template_graphs.py (12 hunks)
- pages/not_found_404.py (1 hunks)
- pages/select.py (14 hunks)
- pages/summary.py (14 hunks)
- tests/python/test_utils.py (2 hunks)
✅ Files skipped from review due to trivial changes (4)
- docs/documentation/tabs-explained/sun-and-cloud/cloud-coverage.md
- docs/documentation/tabs-explained/wind/how-to-read-a-wind-rose.md
- docs/documentation/tabs-explained/tab-summary/degree-days-explained.md
- docs/documentation/tabs-explained/sun-and-cloud/customizable-daily-and-hourly-maps.md
🚧 Files skipped from review as they are similar to previous changes (7)
- config.py
- pages/lib/layout.py
- docs/contributing/contributing.md
- .github/workflows/python.yml
- tests/python/test_utils.py
- pages/not_found_404.py
- docs/documentation/tabs-explained/tab-summary/README.md
🧰 Additional context used
🧬 Code Graph Analysis (3)
pages/summary.py (5)
- config.py (4): `PageUrls` (23-39), `DocLinks` (82-96), `PageInfo` (54-79), `UnitSystem` (16-20)
- pages/lib/charts_summary.py (1): `world_map` (5-36)
- pages/lib/extract_df.py (1): `get_data` (25-48)
- pages/lib/template_graphs.py (1): `violin` (16-82)
- pages/lib/utils.py (2): `title_with_tooltip` (142-174), `title_with_link` (177-213)
pages/lib/template_graphs.py (2)
- config.py (1): `UnitSystem` (16-20)
- pages/lib/utils.py (2): `code_timer` (13-25), `determine_month_and_hour_filter` (260-268)
pages/select.py (3)
- pages/lib/extract_df.py (4): `convert_data` (373-387), `create_df` (78-337), `get_data` (25-48), `get_location_info` (52-74)
- config.py (3): `PageUrls` (23-39), `PageInfo` (54-79), `UnitSystem` (16-20)
- pages/lib/utils.py (1): `generate_chart_name` (28-40)
🪛 LanguageTool
docs/documentation/tabs-explained/tab-summary/climate-profiles-explained.md
[grammar] ~6-~6: There might be a mistake here.
Context: ...a.org/wiki/Kernel_density_estimator). Wider sections of the violin plot repres...
(QB_NEW_EN)
docs/README.md
[grammar] ~17-~17: There might be a mistake here.
Context: .../www.youtube.com/watch?v=VJ_wOHadVdw" %} CBE Clima tool tutorial and overview {% ...
(QB_NEW_EN)
[grammar] ~18-~18: There might be a mistake here.
Context: ... %} CBE Clima tool tutorial and overview {% endembed %} [Watch on YouTube](https...
(QB_NEW_EN)
docs/documentation/tabs-explained/tab-summary/clima-dataframe.md
[grammar] ~16-~16: There might be a mistake here.
Context: ...diation) * Diffuse Horizontal Radiation * [Global Horizontal Illuminance](https://b...
(QB_NEW_EN)
[grammar] ~17-~17: There might be a mistake here.
Context: ...iation) * Global Horizontal Illuminance * [Direct Normal Illuminance](https://bigla...
(QB_NEW_EN)
[grammar] ~18-~18: There might be a mistake here.
Context: ...lluminance) * Direct Normal Illuminance * [Diffuse Horizontal Illuminance ](https:/...
(QB_NEW_EN)
[grammar] ~19-~19: There might be a mistake here.
Context: ...ance) * Diffuse Horizontal Illuminance * [Zenith Luminance ](https://bigladdersoft...
(QB_NEW_EN)
[grammar] ~20-~20: There might be a mistake here.
Context: ...zontal-illuminance) * Zenith Luminance * [Wind Direction](https://bigladdersoftwar...
(QB_NEW_EN)
[grammar] ~21-~21: There might be a mistake here.
Context: ...ield-zenith-luminance) * Wind Direction * [Wind Speed](https://bigladdersoftware.co...
(QB_NEW_EN)
[grammar] ~22-~22: There might be a mistake here.
Context: ...html#field-wind-direction) * Wind Speed * [Total Sky Cover](https://bigladdersoftwa...
(QB_NEW_EN)
[grammar] ~23-~23: There might be a mistake here.
Context: ...tml#field-wind-speed) * Total Sky Cover * [Opaque Sky Cover](https://bigladdersoftw...
(QB_NEW_EN)
[grammar] ~24-~24: There might be a mistake here.
Context: ...eld-total-sky-cover) * Opaque Sky Cover * [Visibility](https://bigladdersoftware.co...
(QB_NEW_EN)
docs/documentation/tabs-explained/README.md
[grammar] ~11-~11: Ensure spelling is correct
Context: ...quence in the organization of the tabs, thy can be accessed in any order. The Foll...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
docs/documentation/tabs-explained/outdoor-comfort/utci-explained.md
[grammar] ~20-~20: There might be a mistake here.
Context: ... wind speed (i.e. exposure to the wind).
(QB_NEW_EN)
docs/documentation/tabs-explained/sun-and-cloud/README.md
[grammar] ~5-~5: There might be a mistake here.
Context: ...ion (spherical and cartesian projection)](broken-reference/) * [Global and Diffuse Horizontal Solar Radi...
(QB_NEW_EN)
[style] ~18-~18: Consider using a more concise synonym.
Context: ..._sun_tab.svg) Clima optionally allows a variety of variables to be overlayed on either sun...
(A_VARIETY_OF)
docs/documentation/tabs-explained/sun-and-cloud/global-and-diffuse-horizontal-solar-radiation/README.md
[grammar] ~14-~14: There might be a mistake here.
Context: ...the different types of solar radiation.
(QB_NEW_EN)
docs/documentation/tabs-explained/tab-home.md
[grammar] ~13-~13: There might be a mistake here.
Context: ...J_wOHadVdw?si=SxvUzaI9rCNIFFs0&t=136" %} How to select or upload an EPW file {% e...
(QB_NEW_EN)
[grammar] ~14-~14: There might be a mistake here.
Context: ..." %} How to select or upload an EPW file {% endembed %}
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.17.2)
docs/documentation/tabs-explained/tab-summary/clima-dataframe.md
19-19: Spaces inside link text
(MD039, no-space-in-links)
20-20: Spaces inside link text
(MD039, no-space-in-links)
docs/documentation/tabs-explained/natural-ventilation.md
29-29: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3
(MD001, heading-increment)
33-33: Bare URL used
(MD034, no-bare-urls)
docs/documentation/tabs-explained/psychrometric-chart/README.md
21-21: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3
(MD001, heading-increment)
25-25: Bare URL used
(MD034, no-bare-urls)
docs/documentation/tabs-explained/sun-and-cloud/README.md
28-28: Bare URL used
(MD034, no-bare-urls)
docs/documentation/tabs-explained/tab-home.md
13-13: Bare URL used
(MD034, no-bare-urls)
🪛 YAMLlint (1.37.1)
cloudbuild.yaml
[error] 15-15: no new line character at the end of file
(new-line-at-end-of-file)
🪛 actionlint (1.7.7)
.github/workflows/deploy.yml
23-23: the runner of "google-github-actions/setup-gcloud@v1" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: cypress
🔇 Additional comments (21)
docs/documentation/tabs-explained/sun-and-cloud/global-and-diffuse-horizontal-solar-radiation/README.md (1)
14-14: Whitespace cleanup: LGTM. Removing the stray HTML whitespace entity before the sentence improves formatting with no semantic change.
docs/README.md (1)
9-9: Citation block reflow looks good. Block-quoting the citation is clearer and keeps the main flow focused. ✅
docs/contributing/run-project-locally.md (1)
30-31: Good call using dev dependencies via pipenv. This matches the repo’s pre-commit/ruff/pytest workflow. No issues.
pages/lib/template_graphs.py (9)
1-1: LGTM! Imports reorganized appropriately. The math imports are now at the top level and the module imports have been updated to use the new config-based structure.
85-85: Good addition of performance monitoring. The `@code_timer` decorator will help track the performance of the `yearly_profile` function, which can be computationally intensive.
260-260: Good addition of `shared_yaxes` for better visual coherence. Adding `shared_yaxes=True` ensures that all subplots use the same y-axis scale, making comparisons across months more accurate.
367-367: Smart offset for centering hour labels on the heatmap. The -0.5 offset properly centers the hour labels on the heatmap cells, improving visualization accuracy.
465-467: Good modernization to f-strings. The conversion from `.format()` to f-strings improves readability and follows modern Python best practices.
492-492: Good use of enum for unit system comparison. Using `UnitSystem.IP` instead of the string literal "ip" improves type safety and maintainability.
502-522: Excellent refactoring of wind rose computation. The new implementation using pandas `cut` and `groupby` operations is more readable and efficient than the previous approach. The binning logic is clearer and the calculation is more maintainable.
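The cut-and-group idea can be illustrated without pandas; a stdlib sketch of the same binning step (the speeds and bin edges below are made-up examples, not Clima's actual wind-speed bins):

```python
import bisect

speeds = [0.5, 3.2, 7.8, 12.1, 3.9]  # sample wind speeds (m/s)
edges = [1.0, 5.0, 11.0]             # hypothetical bin boundaries

# Count observations per bin, analogous to pd.cut(...).groupby(...).size()
counts = [0] * (len(edges) + 1)
for s in speeds:
    counts[bisect.bisect_right(edges, s)] += 1

# Convert counts to the per-bin frequencies a wind rose plots
freq = [100 * c / len(speeds) for c in counts]
```

In the real chart the same grouping is done per wind-direction sector as well, which is where pandas `groupby` pays off over a hand-rolled loop.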
569-569: Good addition of a legend title for clarity. Adding the wind speed unit to the legend title improves chart readability.
485-486: Please confirm the inclusive boundary change for hour filtering.

I wasn’t able to find any existing tests or documentation that explicitly define whether the hour filter should be inclusive or exclusive, so it’s unclear if switching from `< end_hour` to `<= end_hour` (and similarly for `start_hour`) was intentional. This change will include the end hour’s data point, adding one extra hour compared to the previous behavior.

• Locations needing review:
- pages/lib/template_graphs.py, lines 485–486
- pages/lib/template_graphs.py, lines 840–843
If this inclusive behavior is desired, please:
- Add or update tests to cover the boundary conditions (e.g., data exactly at `start_hour` and `end_hour`).
- Update any relevant documentation to specify that both start and end hours are now included.
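The behavioral difference amounts to one extra hour per day; a quick check of the two predicates over a 24-hour range:

```python
hours = list(range(24))
start_hour, end_hour = 8, 17  # example filter bounds

exclusive = [h for h in hours if start_hour <= h < end_hour]   # old behavior
inclusive = [h for h in hours if start_hour <= h <= end_hour]  # new behavior

# The inclusive filter keeps exactly one more hour: end_hour itself.
assert len(inclusive) == len(exclusive) + 1
assert end_hour in inclusive and end_hour not in exclusive
```

A boundary test along these lines would pin down whichever behavior is decided on.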
pages/summary.py (6)
31-41: Good implementation of the page-based routing structure. The new `layout()` function provides a clean entry point for the page with an empty container that will be populated dynamically.
47-47: Consistent use of the UnitSystem enum. The comparison now uses `UnitSystem.SI` instead of a string literal, maintaining consistency with the template_graphs.py changes.
231-231: Good type safety improvement for the unit system check. Using the `UnitSystem.SI` enum instead of a string literal improves type safety and maintainability.
526-526: Consistent enum usage in download path logic. The CSV download path now correctly uses `UnitSystem.SI` for branching, maintaining consistency throughout the codebase.
544-556: New EPW download functionality looks good. The implementation properly handles the EPW file download with appropriate null checks and data processing.
23-28: Proper page registration for Dash page-based routing. The page is correctly registered with appropriate name, path, and order from the config module.
pages/select.py (3)
20-25: Page registration is correct and consistent with config enums. Registering the page with PageInfo/PageUrls is clean and keeps routing centralized. LGTM.
100-106: Alert defaults look good. Copy and sizing are appropriate for the first-run state.
231-242: `NavLink` `disabled` prop usage verified.

All of the IDs used in the callback outputs (`"/"`, `"/summary"`, `"/t-rh"`, `"/sun"`, `"/wind"`, `"/psy-chart"`, `"/explorer"`, `"/outdoor"`, `"/natural-ventilation"`) are assigned to `dbc.NavLink` instances in `pages/lib/layout.py`, which natively support the `disabled` prop. There are no `dcc.Link` components with these IDs in the codebase, so the callback is valid as written.
```yaml
      uses: google-github-actions/setup-gcloud@v1
      with:
```
Update setup-gcloud action to v2 to satisfy actionlint and use the maintained runner.
actionlint flags google-github-actions/setup-gcloud@v1 as too old. Bump to v2.
Apply:
```diff
- - name: Set up Cloud SDK
-   uses: google-github-actions/setup-gcloud@v1
+ - name: Set up Cloud SDK
+   uses: google-github-actions/setup-gcloud@v2
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@v2
        with:
```
🧰 Tools
🪛 actionlint (1.7.7)
23-23: the runner of "google-github-actions/setup-gcloud@v1" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🤖 Prompt for AI Agents
.github/workflows/deploy.yml around lines 23 to 24: update the GitHub Action
reference from google-github-actions/setup-gcloud@v1 to
google-github-actions/setup-gcloud@v2 to satisfy actionlint and use the
maintained runner; change the uses line accordingly and verify any inputs or
output names that differ between v1 and v2 (adjust input keys and step outputs
if needed), then run a workflow lint or dry-run to confirm compatibility.
  project_id: heat-stress-scale
- - name: Building (🏗️)
-   run: |-
+ - name: Building (🏗️) and Deploying (🚀)
+   run: |
      gcloud builds submit \
-       --tag us-docker.pkg.dev/clima-316917/gcr.io/clima
+       --project=clima-316917 \
+       --substitutions=_REPO_NAME="clima",_PROJ_NAME="clima-316917",_IMG_NAME="main",_GCR="us.gcr.io",_REGION="us-central1",_MEMORY="4Gi",_CPU="2"
- # Setup gcloud CLI
- - name: Deploy (🚀)
-   uses: google-github-actions/deploy-cloudrun@v1
-   with:
-     service: clima
-     image: us-docker.pkg.dev/clima-316917/gcr.io/clima
-     region: us-central1
-     credentials: ${{ secrets.GCP_SA_KEY_JSON }}
-     project_id: clima-316917 (no newline at end of file)
💡 Verification agent
🧩 Analysis chain
Project mismatch between setup-gcloud (heat-stress-scale) and builds submit (clima-316917).
This is likely to fail or deploy to the wrong project unless the service account has cross-project roles intentionally. Align both to the same project or remove the --project override.
Apply one of the following:
Option A: Use heat-stress-scale everywhere
  project_id: heat-stress-scale
  ...
- --project=clima-316917 \
+ --project=heat-stress-scale \

Option B: Use clima-316917 everywhere
- project_id: heat-stress-scale
+ project_id: clima-316917

🏁 Script executed:
#!/bin/bash
# Display repository workflows directory to confirm file path
fd -t f .github/workflows
# Print lines 1–40 of the deploy.yml workflow for context
sed -n '1,40p' .github/workflows/deploy.yml

Length of output: 1264
.github/workflows/deploy.yml – Project ID Mismatch
The workflow currently authenticates against heat-stress-scale but then submits the build to clima-316917. This misalignment will either fail the build or deploy into the wrong GCP project unless you’ve intentionally granted the service account cross-project permissions. Align both settings to the same project.
Please choose one of the following fixes:
• Option A: Use heat-stress-scale everywhere
- name: Set up Cloud SDK
uses: google-github-actions/setup-gcloud@v1
with:
- project_id: heat-stress-scale
+ project_id: heat-stress-scale
- name: Building (🏗️) and Deploying (🚀)
run: |
gcloud builds submit \
- --project=clima-316917 \
+ --project=heat-stress-scale \
      --substitutions=_REPO_NAME="clima",_PROJ_NAME="clima-316917",_IMG_NAME="main",_GCR="us.gcr.io",_REGION="us-central1",_MEMORY="4Gi",_CPU="2"

• Option B: Use clima-316917 everywhere
- name: Set up Cloud SDK
uses: google-github-actions/setup-gcloud@v1
with:
- project_id: heat-stress-scale
+ project_id: clima-316917

Please update to ensure both project_id and --project match.
🤖 Prompt for AI Agents
In .github/workflows/deploy.yml around lines 25 to 31, the workflow sets
project_id to "heat-stress-scale" but the gcloud builds submit uses
--project=clima-316917, causing a project mismatch; pick one project and make
both values identical. Update the file so the top-level project_id and the
--project substitution in the gcloud command use the same project identifier
(either change project_id to "clima-316917" or change --project to
"heat-stress-scale") and ensure any other substitutions (_PROJ_NAME, etc.)
reflect that same chosen project.
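The mismatch the comment flags can be caught mechanically before a deploy ever runs. The sketch below is an illustrative consistency check under the assumption that every `project_id:` key and `--project=` flag in one workflow should name the same project; the function name is hypothetical:

```python
import re

def project_mismatches(workflow_text: str) -> set:
    """Collect every GCP project referenced via `project_id:` or
    `--project=` in the workflow text; return the set only if more
    than one distinct project appears (i.e., a mismatch)."""
    projects = set(re.findall(r"project_id:\s*([\w-]+)", workflow_text))
    projects |= set(re.findall(r"--project=([\w-]+)", workflow_text))
    return projects if len(projects) > 1 else set()

snippet = """
        with:
          project_id: heat-stress-scale
      - run: gcloud builds submit --project=clima-316917
"""
print(sorted(project_mismatches(snippet)))  # → ['clima-316917', 'heat-stress-scale']
```

An empty result means all references agree; a non-empty set lists the conflicting project IDs, which is exactly the situation in this workflow.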
The **Sun and Clouds** tab presents an overview of various climatic factors that relate to sun, solar position, intensity, and cloud cover, in particular:

- * [Apparent sunpath for the location (spherical and cartesian projection)](broken-reference)
+ * [Apparent sunpath for the location (spherical and cartesian projection)](broken-reference/)
  * [Global and Diffuse Horizontal Solar Radiation](global-and-diffuse-horizontal-solar-radiation/)
💡 Verification agent
🧩 Analysis chain
Standardize terminology and fix unresolved link
- Prefer “sun path” (two words) to match the H2 below.
- Capitalize “Cartesian.”
- The target is currently “broken-reference/”. Please point this to the actual section/page.
-* [Apparent sunpath for the location (spherical and cartesian projection)](broken-reference/)
+* [Apparent sun path for the location (spherical and Cartesian projection)](broken-reference/)

Run this to locate the correct doc/anchor to link:
🏁 Script executed:
#!/bin/bash
# Search for the "Apparent sun path" section or file to replace the broken link.
rg -n -C2 -i 'apparent\s+sun\s*path|sunpath' docs | sed -n '1,200p'

Length of output: 5744
Fix “Apparent sun path” terminology and resolve broken link
Please update the first bullet in docs/documentation/tabs-explained/sun-and-cloud/README.md to use “sun path” (two words), capitalize “Cartesian,” and point to the internal section anchor:
• File: docs/documentation/tabs-explained/sun-and-cloud/README.md (line 5)
--- a/docs/documentation/tabs-explained/sun-and-cloud/README.md
+++ b/docs/documentation/tabs-explained/sun-and-cloud/README.md
@@ -5,7 +5,7 @@ The **Sun and Clouds** tab presents an overview of various climatic factors tha
-* [Apparent sunpath for the location (spherical and cartesian projection)](broken-reference/)
+* [Apparent sun path for the location (spherical and Cartesian projection)](#apparent-sun-path-for-the-location)
🧰 Tools
🪛 LanguageTool
[grammar] ~5-~5: There might be a mistake here.
Context: ...ion (spherical and cartesian projection)](broken-reference/) * [Global and Diffuse Horizontal Solar Radi...
(QB_NEW_EN)
[grammar] ~6-~6: There might be a mistake here.
Context: ... * Global and Diffuse Horizontal Solar Radiation * Cloud coverage * [Cu...
(QB_NEW_EN)
🤖 Prompt for AI Agents
In docs/documentation/tabs-explained/sun-and-cloud/README.md around lines 3 to
6, update the first bullet so the phrase reads "Apparent sun path for the
location (spherical and Cartesian projection)" (two words for "sun path" and
capitalize "Cartesian") and replace the broken external link with the correct
internal section anchor link to the corresponding heading in this README (use
the markdown fragment for the "Apparent sun path..." section so the bullet
points to that internal anchor).
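The fragment used in the suggested link follows GitHub-flavoured markdown's heading-to-anchor rule: lowercase, strip punctuation, replace spaces with hyphens. A small sketch of that rule (an approximation, not GitHub's exact slugger, which also handles duplicates and some Unicode cases) makes the target checkable against the actual H2 text:

```python
import re

def heading_anchor(heading: str) -> str:
    """Approximate GitHub's markdown heading anchor: lowercase,
    drop punctuation, turn whitespace runs into single hyphens."""
    slug = heading.strip().lower()
    slug = re.sub(r"[^\w\s-]", "", slug)   # drop punctuation
    slug = re.sub(r"\s+", "-", slug)       # whitespace → hyphens
    return f"#{slug}"

print(heading_anchor("Apparent sun path for the location"))
# → '#apparent-sun-path-for-the-location'
```

If the README's H2 wording changes, regenerating the anchor with this rule keeps the bullet link from silently breaking again.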
fig = px.scatter_mapbox(
    df.head(2585),
    lat="lat",
    lon="lon",
    hover_name="properties.title",
    color_discrete_sequence=["#3a0ca3"],
    hover_data=["Source"],
    zoom=2,
    height=500,
)
🛠️ Refactor suggestion
Set custom_data for URL, avoid magic numbers, and simplify hover_name.
- Add custom_data=["Source"] so clickData contains the URL.
- Replace hover_name series with column name for clarity.
- Extract “2585” into a named constant for readability.
+ MAX_POINTS = 2585 # limit initial marker count for performance
@@
- fig = px.scatter_mapbox(
- df.head(2585),
+ fig = px.scatter_mapbox(
+ df.head(MAX_POINTS),
lat="lat",
lon="lon",
hover_name="properties.title",
color_discrete_sequence=["#3a0ca3"],
- hover_data=["Source"],
+ hover_data=["Source"],
+ custom_data=["Source"],
zoom=2,
height=500,
)
@@
- fig2 = px.scatter_mapbox(
+ fig2 = px.scatter_mapbox(
df_one_building,
lat="lat",
lon="lon",
- hover_name=df_one_building["name"],
+ hover_name="name",
color_discrete_sequence=["#4895ef"],
hover_data=[
"period",
"elevation (m)",
"time zone (GMT)",
"99% Heating DB",
"1% Cooling DB ",
"Source",
],
+ custom_data=["Source"],
zoom=2,
height=500,
    )

Also applies to: 348-364
🤖 Prompt for AI Agents
In pages/select.py around lines 336 to 345 (and similarly lines 348 to 364), the
scatter_mapbox call uses a magic number 2585, sets hover_name to a series
expression "properties.title", and doesn't set custom_data so clickData won't
include the URL; define a named constant (e.g., MAX_POINTS = 2585) and use
df.head(MAX_POINTS), change hover_name to the column name (e.g., "title")
instead of "properties.title", and add custom_data=["Source"] to the
px.scatter_mapbox args so the Source/URL is included in clickData.
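The reason `custom_data=["Source"]` matters: Plotly echoes each trace's custom data back in the `clickData` payload under `"customdata"`, so a click handler can read the Source URL straight from the event. A hedged sketch of that extraction — the payload shape follows Plotly's clickData convention, but the function name and sample values are illustrative:

```python
def source_from_click(click_data: dict):
    """Pull the first custom_data value (here, the Source URL) out of a
    Plotly clickData payload; return None if the payload is empty or
    the trace was built without custom_data."""
    try:
        return click_data["points"][0]["customdata"][0]
    except (KeyError, IndexError, TypeError):
        return None

# Shape of a clickData payload for a point carrying custom_data:
sample = {"points": [{"lat": 37.8, "lon": -122.3,
                      "customdata": ["https://example.com/station.epw"]}]}
print(source_from_click(sample))        # → https://example.com/station.epw
print(source_from_click({"points": []}))  # → None
```

Without `custom_data`, the callback would have to re-join the clicked point back to the dataframe by coordinates, which is both slower and more fragile.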